Artificial Neural Network Bank Customer Churn Prediction Model

The objective of this blog is to design a neural network model to predict bank customer churn, that is, whether customers will remain with the bank or opt out of its banking services in the next 6 months. The nature of this model is classification.
Data Description:
The case study is from an open-source dataset from Kaggle.
The dataset contains 10,000 sample points with 14 distinct columns such as CustomerId, CreditScore, Geography, Gender, Age, Tenure and Balance.
Link to the Kaggle project site:
https://www.kaggle.com/barelydedicated/bank-customer-churn-modeling
The input dataset contains 14 columns, of which 13 are used as independent features and the last one is the dependent (target) feature.

Abstract

Customer churn, customers leaving a product or subscription and moving to another provider, is a major problem for businesses. Because churn has a direct effect on profit margins, businesses look to identify customers who are at risk of churning and to retain them with personalized promotional offers. To retain them, they need to identify both the customers at risk and the reasons for churning, so that they can offer personalized products and promotions. The aim of our project is to solve this problem for the banking domain by identifying which customers are at risk of churning, and the reasons for churning, with the help of data mining and machine learning algorithms. The project focuses on 2 deliverables: predict customers likely to churn using supervised learning classification algorithms, and segment customers using unsupervised learning to validate the similarities within the 'likely to churn' subset and come up with different segments. The reasons a particular customer churns can stem from internal as well as external factors, but we will try to understand the reasons for churning based on internal factors using explainable AI, which breaks into the black box of machine learning algorithms and gives a clear explanation of the predictions.

Methodology

• Analyze the underlying distribution of the number of users who are about to leave the subscription and perform customer segmentation on the likely-to-churn customers.

• Identify the reasons, which helps in targeted marketing, and generate cluster analysis details with explainable AI.

In [1]:
pip install tensorflow
Requirement already satisfied: tensorflow in c:\users\d.zografos\anaconda3\lib\site-packages (2.3.1)
Note: you may need to restart the kernel to use updated packages.
In [2]:
pip install keras
Requirement already satisfied: keras in c:\users\d.zografos\anaconda3\lib\site-packages (2.4.3)
Note: you may need to restart the kernel to use updated packages.
In [3]:
pip install chart-studio
Requirement already satisfied: chart-studio in c:\users\d.zografos\anaconda3\lib\site-packages (1.1.0)
Note: you may need to restart the kernel to use updated packages.
In [4]:
import numpy as np
import keras
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
from sklearn import metrics
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score, precision_score, f1_score, auc
import warnings
warnings.filterwarnings('ignore')
In [5]:
import tensorflow as tf
print(tf.__version__)
2.3.1

1. EDA & Data Preprocessing

1.1. Read the dataset

In [6]:
#importing the dataset, set RowNumber as index
#load the csv file and make the data frame
df = pd.read_csv('Churn_Modelling.csv',index_col='RowNumber')
In [7]:
df.shape # 10,000 rows, 13 columns
Out[7]:
(10000, 13)
In [8]:
df.head(2) #Exited is target column
Out[8]:
CustomerId Surname CreditScore Geography Gender Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited
RowNumber
1 15634602 Hargrave 619 France Female 42 2 0.00 1 1 1 101348.88 1
2 15647311 Hill 608 Spain Female 41 1 83807.86 1 0 1 112542.58 0
In [9]:
#Check datatypes
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10000 entries, 1 to 10000
Data columns (total 13 columns):
 #   Column           Non-Null Count  Dtype  
---  ------           --------------  -----  
 0   CustomerId       10000 non-null  int64  
 1   Surname          10000 non-null  object 
 2   CreditScore      10000 non-null  int64  
 3   Geography        10000 non-null  object 
 4   Gender           10000 non-null  object 
 5   Age              10000 non-null  int64  
 6   Tenure           10000 non-null  int64  
 7   Balance          10000 non-null  float64
 8   NumOfProducts    10000 non-null  int64  
 9   HasCrCard        10000 non-null  int64  
 10  IsActiveMember   10000 non-null  int64  
 11  EstimatedSalary  10000 non-null  float64
 12  Exited           10000 non-null  int64  
dtypes: float64(2), int64(8), object(3)
memory usage: 1.1+ MB

Surname, Gender and Geography are of object type.

In [10]:
#Check for missing values
df.isna().sum()
Out[10]:
CustomerId         0
Surname            0
CreditScore        0
Geography          0
Gender             0
Age                0
Tenure             0
Balance            0
NumOfProducts      0
HasCrCard          0
IsActiveMember     0
EstimatedSalary    0
Exited             0
dtype: int64

There are no missing values or other types of noise in the dataset.

In [11]:
df.describe().round(2)
Out[11]:
CustomerId CreditScore Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited
count 10000.00 10000.00 10000.00 10000.00 10000.00 10000.00 10000.00 10000.00 10000.00 10000.0
mean 15690940.57 650.53 38.92 5.01 76485.89 1.53 0.71 0.52 100090.24 0.2
std 71936.19 96.65 10.49 2.89 62397.41 0.58 0.46 0.50 57510.49 0.4
min 15565701.00 350.00 18.00 0.00 0.00 1.00 0.00 0.00 11.58 0.0
25% 15628528.25 584.00 32.00 3.00 0.00 1.00 0.00 0.00 51002.11 0.0
50% 15690738.00 652.00 37.00 5.00 97198.54 1.00 1.00 1.00 100193.92 0.0
75% 15753233.75 718.00 44.00 7.00 127644.24 2.00 1.00 1.00 149388.25 0.0
max 15815690.00 850.00 92.00 10.00 250898.09 4.00 1.00 1.00 199992.48 1.0

2. Drop the columns which are unique for all users, like IDs

3. Distinguish the feature and target set

In [12]:
# Convert data into feature and target sets. CustomerId and Surname will not contribute to model building,
# hence we will drop these 2 columns as well
X=df.drop(labels=['CustomerId','Surname','Exited'], axis=1) # Feature Set
y=df['Exited'] # Target set
In [13]:
#target variable is y=df['Exited']
#look at distribution of exited and non-exited customers
In [14]:
sns.countplot(x="Exited", data=df)
Out[14]:
<matplotlib.axes._subplots.AxesSubplot at 0x1dec4153988>
In [15]:
labels = 'Exited', 'Retained'
sizes = [df.Exited[df['Exited']==1].count(), df.Exited[df['Exited']==0].count()]
explode = (0, 0.1)
fig1, ax1 = plt.subplots(figsize=(10, 8))
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
        shadow=True, startangle=90)
ax1.axis('equal')
plt.title("Proportion of customer churned and retained", size = 20)
plt.show()

Main Observations

The dependent variable (Exited), the value that we are going to predict, is whether the customer exits the bank (a binary variable: 0 if the customer stays and 1 if the customer exits).
The independent variables are:

  1. CreditScore: reliability of the customer
  2. Geography: where the customer is from
  3. Gender: Male or Female
  4. Age
  5. Tenure: number of years of customer history with the company
  6. Balance: the money in the bank account
  7. NumOfProducts: number of products the customer holds with the bank
  8. HasCrCard: whether or not the customer has a credit card
  9. IsActiveMember: whether or not the customer is active
  10. EstimatedSalary: estimation of salary based on the entries

The data set has only around 2,000 exited customers while about 8,000 customers are still with the bank, so it is biased towards existing customers.
Note: About 20% of the customers have churned, so a baseline model could be to predict that 20% of the customers will churn. Given that 20% is a small number, we need to ensure that the chosen model predicts this 20% with high accuracy, since the bank is more interested in identifying and keeping these customers than in accurately predicting the customers that are retained.
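
As a quick sanity check, the class balance (and therefore the accuracy of a naive "predict everyone stays" baseline) can be read directly from the target column. A minimal sketch, assuming df has been loaded as above:
churn_rate = df['Exited'].mean()  # fraction of customers who churned (~0.20 here)
print(f"Churn rate: {churn_rate:.1%}")
print(f"Accuracy of always predicting 'retained': {1 - churn_rate:.1%}")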

In [16]:
sns.countplot(x="Gender", data=df)
Out[16]:
<matplotlib.axes._subplots.AxesSubplot at 0x1dec4918588>

The bank has about 4,500 female customers and 5,500 male customers.

In [17]:
sns.countplot(x="Geography", data=df)
Out[17]:
<matplotlib.axes._subplots.AxesSubplot at 0x1dec4961c48>

Most of the customers are from France; customers from Spain and Germany each number about half as many as those from France.

In [18]:
 sns.countplot(x="Exited", hue="Gender", data=df)
Out[18]:
<matplotlib.axes._subplots.AxesSubplot at 0x1dec49b6908>

The plot above shows that female customers have a higher propensity to exit the bank.

In [19]:
 sns.countplot(x="Exited", hue="Geography", data=df)
Out[19]:
<matplotlib.axes._subplots.AxesSubplot at 0x1dec4a36188>
In [20]:
df.select_dtypes(exclude='object').hist(figsize=(14,10),bins=20)
sns.countplot(df['Exited'])
df['Exited'].value_counts().unique()
corr = df.corr()
corr.style.background_gradient(cmap='Greens').set_precision(2)
Out[20]:
CustomerId CreditScore Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited
CustomerId 1.00 0.01 0.01 -0.01 -0.01 0.02 -0.01 0.00 0.02 -0.01
CreditScore 0.01 1.00 -0.00 0.00 0.01 0.01 -0.01 0.03 -0.00 -0.03
Age 0.01 -0.00 1.00 -0.01 0.03 -0.03 -0.01 0.09 -0.01 0.29
Tenure -0.01 0.00 -0.01 1.00 -0.01 0.01 0.02 -0.03 0.01 -0.01
Balance -0.01 0.01 0.03 -0.01 1.00 -0.30 -0.01 -0.01 0.01 0.12
NumOfProducts 0.02 0.01 -0.03 0.01 -0.30 1.00 0.00 0.01 0.01 -0.05
HasCrCard -0.01 -0.01 -0.01 0.02 -0.01 0.00 1.00 -0.01 -0.01 -0.01
IsActiveMember 0.00 0.03 0.09 -0.03 -0.01 0.01 -0.01 1.00 -0.01 -0.16
EstimatedSalary 0.02 -0.00 -0.01 0.01 0.01 0.01 -0.01 -0.01 1.00 0.01
Exited -0.01 -0.03 0.29 -0.01 0.12 -0.05 -0.01 -0.16 0.01 1.00
In [21]:
#Gender, Geography and Exited categories shown as a parallel categories plot
fig = px.parallel_categories(df, dimensions=['Gender', 'Geography', 'Exited'],
                color="Exited", color_continuous_scale=px.colors.sequential.Inferno,
                labels={'Gender':'Gender(Female,Male)', 'Exited':'Exited(0:No,1:Yes)'})
fig.update_layout(title_text="Gender-Geography-Exited-Not Exited Schema")
fig.show();

Around 20% of people exited. Proportionally more females exited (1139 female vs. 898 male). Germans exited more both proportionally and numerically (448 female + 366 male). French females exited more in absolute numbers (460), while French customers overall stay at the bank more than all others (5014 customers in total, of whom 810 exited).

In [22]:
fig = px.parallel_categories(df, dimensions=['Gender','HasCrCard',"IsActiveMember", 'Exited'],
                color="Exited", color_continuous_scale=px.colors.sequential.Inferno,
                labels={'HasCrCard':'Has Credit Card', 'Gender':'Gender(Female,Male)', 'Exited':'Exited(0:No,1:Yes)'})
fig.update_layout(title_text="Credit Card-Gender-Exited-Not Exited Schema")
fig.show();

In the dynamic presentation above we can see how the categories affect the decision to exit: females and inactive members are more prone to exit. Credit card users are also slightly more prone to exit than non-credit-card users.

Customers from Germany have the highest propensity to exit the bank.

In [23]:
#Lets Check Distribution of exited/non-exited Customers as per the age
In [24]:
plt.figure(figsize=(15, 8))
sns.distplot(df['Age'][df['Exited']==0],color='blue',label='non-exited')
sns.distplot(df['Age'][df['Exited']==1],color='red',label='exited')
plt.show()

The age distribution of customers who exited the bank is approximately normal, while that of customers who stay with the bank is right-skewed, indicating that most of the bank's existing customers are under 50 years of age. This may also indicate that older customers have exited the bank.

Now we are going to look at an important and powerful column, 'Geography', and visualize it with Plotly, since it is an interactive visualization library.

In [25]:
France = float(df[df['Geography']=='France']['Geography'].count())
Spain = float(df[df['Geography']=='Spain']['Geography'].count())
Germany = float(df[df['Geography']=='Germany']['Geography'].count())
print(France+Spain+Germany)
10000.0
In [26]:
import chart_studio.plotly as py
import plotly.graph_objects as go

import plotly.graph_objs as go 
from plotly.offline import download_plotlyjs,init_notebook_mode,plot,iplot
init_notebook_mode(connected=True)


data = dict(type='choropleth',
           locations=['ESP','FRA','DEU'],
           colorscale='YlGnBu',
           text = ['Spain','France','Germany'],
           z=[Spain,France,Germany],  # order must match the ISO codes in 'locations'
           colorbar={'title':'number in each geography'})
layout = dict(title='Counting the numbers of each nationality',
              geo=dict(showframe=False,projection={'type':'natural earth'}))
choromap = go.Figure(data=[data],layout=layout)
In [27]:
iplot(choromap)
In [28]:
X.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10000 entries, 1 to 10000
Data columns (total 10 columns):
 #   Column           Non-Null Count  Dtype  
---  ------           --------------  -----  
 0   CreditScore      10000 non-null  int64  
 1   Geography        10000 non-null  object 
 2   Gender           10000 non-null  object 
 3   Age              10000 non-null  int64  
 4   Tenure           10000 non-null  int64  
 5   Balance          10000 non-null  float64
 6   NumOfProducts    10000 non-null  int64  
 7   HasCrCard        10000 non-null  int64  
 8   IsActiveMember   10000 non-null  int64  
 9   EstimatedSalary  10000 non-null  float64
dtypes: float64(2), int64(6), object(2)
memory usage: 1.2+ MB
In [29]:
# Geography and Gender are object type; we will convert them into one-hot encoded columns
In [30]:
X= pd.get_dummies(X)
In [31]:
X.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10000 entries, 1 to 10000
Data columns (total 13 columns):
 #   Column             Non-Null Count  Dtype  
---  ------             --------------  -----  
 0   CreditScore        10000 non-null  int64  
 1   Age                10000 non-null  int64  
 2   Tenure             10000 non-null  int64  
 3   Balance            10000 non-null  float64
 4   NumOfProducts      10000 non-null  int64  
 5   HasCrCard          10000 non-null  int64  
 6   IsActiveMember     10000 non-null  int64  
 7   EstimatedSalary    10000 non-null  float64
 8   Geography_France   10000 non-null  uint8  
 9   Geography_Germany  10000 non-null  uint8  
 10  Geography_Spain    10000 non-null  uint8  
 11  Gender_Female      10000 non-null  uint8  
 12  Gender_Male        10000 non-null  uint8  
dtypes: float64(2), int64(6), uint8(5)
memory usage: 1.0 MB

The object columns, Geography and Gender, have been converted to one-hot encoded columns.

In [32]:
#Lets Check first few rows of feature set
In [33]:
X.head()
Out[33]:
CreditScore Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Geography_France Geography_Germany Geography_Spain Gender_Female Gender_Male
RowNumber
1 619 42 2 0.00 1 1 1 101348.88 1 0 0 1 0
2 608 41 1 83807.86 1 0 1 112542.58 0 0 1 1 0
3 502 42 8 159660.80 3 1 0 113931.57 1 0 0 1 0
4 699 39 1 0.00 2 0 0 93826.63 1 0 0 1 0
5 850 43 2 125510.82 1 1 1 79084.10 0 0 1 1 0

4. Divide the data set into training and test sets

In [34]:
from sklearn.model_selection import train_test_split
#test train split
test_size = 0.30 # taking 70:30 training and test set
seed = 7  # random number seed for repeatability of the code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)
In [35]:
# Check shape of the train/test sets
X_train.shape, X_test.shape, y_train.shape, y_test.shape
Out[35]:
((7000, 13), (3000, 13), (7000,), (3000,))

5. Normalize the train and test data

Normalisation: we scale the following features, which take running/continuous values, using StandardScaler: CreditScore, Age, Tenure, Balance, NumOfProducts, EstimatedSalary.

We will not normalise the following features, as they take discrete values of either 0 or 1: HasCrCard, IsActiveMember, Geography_France, Geography_Germany, Geography_Spain, Gender_Female, Gender_Male.
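
For reference, the same split (scale only the continuous columns, pass the 0/1 columns through untouched) can also be expressed with scikit-learn's ColumnTransformer. This is only an alternative sketch, not the approach used below, which fits the scaler manually:
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

continuous_cols = ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary']

# Scale only the continuous columns; leave the binary/one-hot columns unchanged.
# Note: the transformed output places the scaled columns first, then the passthrough columns.
preprocessor = ColumnTransformer([('scale', StandardScaler(), continuous_cols)], remainder='passthrough')
X_train_prepared = preprocessor.fit_transform(X_train)   # fit on the training split only
X_test_prepared = preprocessor.transform(X_test)         # apply the same transform to the test split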

In [36]:
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
In [37]:
X_train[['CreditScore','Age','Tenure','Balance','NumOfProducts','EstimatedSalary']].head(2)
Out[37]:
CreditScore Age Tenure Balance NumOfProducts EstimatedSalary
RowNumber
2318 630 36 2 110414.48 1 48984.95
260 850 38 3 54901.01 1 140075.55
In [38]:
X_train.head(2)
Out[38]:
CreditScore Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Geography_France Geography_Germany Geography_Spain Gender_Female Gender_Male
RowNumber
2318 630 36 2 110414.48 1 1 1 48984.95 1 0 0 1 0
260 850 38 3 54901.01 1 1 1 140075.55 0 1 0 0 1
In [39]:
scaler.fit(X_train[['CreditScore','Age','Tenure','Balance','NumOfProducts','EstimatedSalary']])
Out[39]:
StandardScaler()
In [40]:
X_train_scaled=scaler.transform(X_train[['CreditScore','Age','Tenure','Balance','NumOfProducts','EstimatedSalary']])
In [41]:
# Transform test set on the same fit as train set
X_test_scaled=scaler.transform(X_test[['CreditScore','Age','Tenure','Balance','NumOfProducts','EstimatedSalary']])
The following step puts the scaled data back into the dataframe for the columns that have been scaled, while keeping the other data intact.
In [42]:
# Put back scaled data into the dataframe for the columns which  have been scaled while keeping other data intact
X_train[['CreditScore','Age','Tenure','Balance','NumOfProducts','EstimatedSalary']]=X_train_scaled
In [43]:
X_train.head(2)
Out[43]:
CreditScore Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Geography_France Geography_Germany Geography_Spain Gender_Female Gender_Male
RowNumber
2318 -0.212665 -0.275584 -1.044043 0.535759 -0.902887 1 1 -0.885624 1 0 0 1 0
260 2.072829 -0.085732 -0.699815 -0.354525 -0.902887 1 1 0.690326 0 1 0 0 1
In [44]:
X_test[['CreditScore','Age','Tenure','Balance','NumOfProducts','EstimatedSalary']]=X_test_scaled
In [45]:
X_test.head(2)
Out[45]:
CreditScore Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Geography_France Geography_Germany Geography_Spain Gender_Female Gender_Male
RowNumber
1978 0.691144 -0.370510 -1.388270 -1.234987 2.518064 0 0 1.290574 0 0 1 0 1
3881 0.275599 3.141754 1.021323 -1.234987 0.807589 1 1 0.924388 1 0 0 1 0
Convert Data into Numpy arrays
In [46]:
# Convert Data into Numpy arrays
X_train_array=np.array(X_train)
X_test_array=np.array(X_test)
y_train_array=np.array(y_train)
y_test_array=np.array(y_test)
In [47]:
X_train_array.shape,X_test_array.shape,y_train_array.shape,y_test_array.shape#check shapes of array
Out[47]:
((7000, 13), (3000, 13), (7000,), (3000,))

6. MODEL BUILDING

6.A Initialize & build the model (Basic Model with two (2) hidden layers)

In [48]:
# Initialize Sequential model
model = tf.keras.models.Sequential()


# Add Input layer to the model
model.add(tf.keras.Input(shape=(13,))) # 13 Features

# Batch Normalization Layer
model.add(tf.keras.layers.BatchNormalization())

# Hidden layers
model.add(tf.keras.layers.Dense(13, activation='relu', name='Layer_1'))
model.add(tf.keras.layers.Dense(10, activation='relu', name='Layer_2'))

#Output layer
model.add(tf.keras.layers.Dense(1, activation='sigmoid', name='Output'))
Compile Model
In [49]:
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
Summarise Model
In [50]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
batch_normalization (BatchNo (None, 13)                52        
_________________________________________________________________
Layer_1 (Dense)              (None, 13)                182       
_________________________________________________________________
Layer_2 (Dense)              (None, 10)                140       
_________________________________________________________________
Output (Dense)               (None, 1)                 11        
=================================================================
Total params: 385
Trainable params: 359
Non-trainable params: 26
_________________________________________________________________

Fit Model & Prediction

In [51]:
model.fit(X_train_array, y_train_array, validation_data=(X_test_array, y_test_array), epochs=150,
          batch_size = 32)
Epoch 1/150
219/219 [==============================] - 0s 2ms/step - loss: 0.5110 - accuracy: 0.7720 - val_loss: 0.4457 - val_accuracy: 0.7997
Epoch 2/150
219/219 [==============================] - 0s 984us/step - loss: 0.4418 - accuracy: 0.8053 - val_loss: 0.4252 - val_accuracy: 0.8210
Epoch 3/150
219/219 [==============================] - 0s 984us/step - loss: 0.4283 - accuracy: 0.8170 - val_loss: 0.4173 - val_accuracy: 0.8250
Epoch 4/150
219/219 [==============================] - 0s 910us/step - loss: 0.4209 - accuracy: 0.8217 - val_loss: 0.4118 - val_accuracy: 0.8273
Epoch 5/150
219/219 [==============================] - 0s 886us/step - loss: 0.4177 - accuracy: 0.8244 - val_loss: 0.4088 - val_accuracy: 0.8273
Epoch 6/150
219/219 [==============================] - 0s 955us/step - loss: 0.4109 - accuracy: 0.8297 - val_loss: 0.4037 - val_accuracy: 0.8307
Epoch 7/150
219/219 [==============================] - 0s 942us/step - loss: 0.4123 - accuracy: 0.8254 - val_loss: 0.4003 - val_accuracy: 0.8323
Epoch 8/150
219/219 [==============================] - 0s 908us/step - loss: 0.4059 - accuracy: 0.8347 - val_loss: 0.3942 - val_accuracy: 0.8330
Epoch 9/150
219/219 [==============================] - 0s 983us/step - loss: 0.4003 - accuracy: 0.8353 - val_loss: 0.3885 - val_accuracy: 0.8347
Epoch 10/150
219/219 [==============================] - 0s 912us/step - loss: 0.3976 - accuracy: 0.8360 - val_loss: 0.3805 - val_accuracy: 0.8363
Epoch 11/150
219/219 [==============================] - 0s 938us/step - loss: 0.3893 - accuracy: 0.8387 - val_loss: 0.3705 - val_accuracy: 0.8480
Epoch 12/150
219/219 [==============================] - 0s 918us/step - loss: 0.3780 - accuracy: 0.8437 - val_loss: 0.3609 - val_accuracy: 0.8523
Epoch 13/150
219/219 [==============================] - 0s 933us/step - loss: 0.3712 - accuracy: 0.8426 - val_loss: 0.3523 - val_accuracy: 0.8583
Epoch 14/150
219/219 [==============================] - 0s 891us/step - loss: 0.3663 - accuracy: 0.8457 - val_loss: 0.3482 - val_accuracy: 0.8620
Epoch 15/150
219/219 [==============================] - 0s 941us/step - loss: 0.3625 - accuracy: 0.8493 - val_loss: 0.3478 - val_accuracy: 0.8610
Epoch 16/150
219/219 [==============================] - 0s 911us/step - loss: 0.3552 - accuracy: 0.8557 - val_loss: 0.3441 - val_accuracy: 0.8637
Epoch 17/150
219/219 [==============================] - 0s 933us/step - loss: 0.3558 - accuracy: 0.8519 - val_loss: 0.3434 - val_accuracy: 0.8650
Epoch 18/150
219/219 [==============================] - 0s 834us/step - loss: 0.3552 - accuracy: 0.8519 - val_loss: 0.3426 - val_accuracy: 0.8620
Epoch 19/150
219/219 [==============================] - 0s 957us/step - loss: 0.3542 - accuracy: 0.8509 - val_loss: 0.3417 - val_accuracy: 0.8640
Epoch 20/150
219/219 [==============================] - 0s 953us/step - loss: 0.3554 - accuracy: 0.8534 - val_loss: 0.3441 - val_accuracy: 0.8600
Epoch 21/150
219/219 [==============================] - 0s 951us/step - loss: 0.3550 - accuracy: 0.8510 - val_loss: 0.3402 - val_accuracy: 0.8637
Epoch 22/150
219/219 [==============================] - 0s 917us/step - loss: 0.3510 - accuracy: 0.8574 - val_loss: 0.3407 - val_accuracy: 0.8640
Epoch 23/150
219/219 [==============================] - 0s 923us/step - loss: 0.3516 - accuracy: 0.8550 - val_loss: 0.3415 - val_accuracy: 0.8607
Epoch 24/150
219/219 [==============================] - 0s 974us/step - loss: 0.3539 - accuracy: 0.8503 - val_loss: 0.3420 - val_accuracy: 0.8603
Epoch 25/150
219/219 [==============================] - 0s 919us/step - loss: 0.3507 - accuracy: 0.8513 - val_loss: 0.3393 - val_accuracy: 0.8633
Epoch 26/150
219/219 [==============================] - 0s 954us/step - loss: 0.3513 - accuracy: 0.8534 - val_loss: 0.3401 - val_accuracy: 0.8633
Epoch 27/150
219/219 [==============================] - 0s 897us/step - loss: 0.3487 - accuracy: 0.8541 - val_loss: 0.3398 - val_accuracy: 0.8643
Epoch 28/150
219/219 [==============================] - 0s 984us/step - loss: 0.3463 - accuracy: 0.8557 - val_loss: 0.3397 - val_accuracy: 0.8617
Epoch 29/150
219/219 [==============================] - 0s 902us/step - loss: 0.3479 - accuracy: 0.8561 - val_loss: 0.3400 - val_accuracy: 0.8657
Epoch 30/150
219/219 [==============================] - 0s 949us/step - loss: 0.3446 - accuracy: 0.8556 - val_loss: 0.3390 - val_accuracy: 0.8643
Epoch 31/150
219/219 [==============================] - 0s 949us/step - loss: 0.3467 - accuracy: 0.8577 - val_loss: 0.3394 - val_accuracy: 0.8607
Epoch 32/150
219/219 [==============================] - 0s 947us/step - loss: 0.3474 - accuracy: 0.8544 - val_loss: 0.3390 - val_accuracy: 0.8603
Epoch 33/150
219/219 [==============================] - 0s 961us/step - loss: 0.3504 - accuracy: 0.8560 - val_loss: 0.3400 - val_accuracy: 0.8653
Epoch 34/150
219/219 [==============================] - 0s 893us/step - loss: 0.3499 - accuracy: 0.8550 - val_loss: 0.3389 - val_accuracy: 0.8620
Epoch 35/150
219/219 [==============================] - 0s 923us/step - loss: 0.3484 - accuracy: 0.8536 - val_loss: 0.3386 - val_accuracy: 0.8607
Epoch 36/150
219/219 [==============================] - 0s 961us/step - loss: 0.3502 - accuracy: 0.8503 - val_loss: 0.3386 - val_accuracy: 0.8637
Epoch 37/150
219/219 [==============================] - 0s 876us/step - loss: 0.3472 - accuracy: 0.8539 - val_loss: 0.3381 - val_accuracy: 0.8580
Epoch 38/150
219/219 [==============================] - 0s 946us/step - loss: 0.3450 - accuracy: 0.8556 - val_loss: 0.3368 - val_accuracy: 0.8610
Epoch 39/150
219/219 [==============================] - 0s 833us/step - loss: 0.3433 - accuracy: 0.8550 - val_loss: 0.3372 - val_accuracy: 0.8643
Epoch 40/150
219/219 [==============================] - 0s 906us/step - loss: 0.3463 - accuracy: 0.8564 - val_loss: 0.3366 - val_accuracy: 0.8617
Epoch 41/150
219/219 [==============================] - 0s 883us/step - loss: 0.3415 - accuracy: 0.8587 - val_loss: 0.3368 - val_accuracy: 0.8630
Epoch 42/150
219/219 [==============================] - 0s 882us/step - loss: 0.3457 - accuracy: 0.8559 - val_loss: 0.3373 - val_accuracy: 0.8587
Epoch 43/150
219/219 [==============================] - 0s 954us/step - loss: 0.3480 - accuracy: 0.8571 - val_loss: 0.3369 - val_accuracy: 0.8580
Epoch 44/150
219/219 [==============================] - 0s 842us/step - loss: 0.3452 - accuracy: 0.8574 - val_loss: 0.3387 - val_accuracy: 0.8580
Epoch 45/150
219/219 [==============================] - 0s 868us/step - loss: 0.3454 - accuracy: 0.8551 - val_loss: 0.3379 - val_accuracy: 0.8623
Epoch 46/150
219/219 [==============================] - 0s 907us/step - loss: 0.3443 - accuracy: 0.8601 - val_loss: 0.3362 - val_accuracy: 0.8643
Epoch 47/150
219/219 [==============================] - 0s 941us/step - loss: 0.3453 - accuracy: 0.8573 - val_loss: 0.3355 - val_accuracy: 0.8643
Epoch 48/150
219/219 [==============================] - 0s 956us/step - loss: 0.3405 - accuracy: 0.8597 - val_loss: 0.3365 - val_accuracy: 0.8613
Epoch 49/150
219/219 [==============================] - 0s 897us/step - loss: 0.3439 - accuracy: 0.8559 - val_loss: 0.3364 - val_accuracy: 0.8657
Epoch 50/150
219/219 [==============================] - 0s 959us/step - loss: 0.3426 - accuracy: 0.8570 - val_loss: 0.3369 - val_accuracy: 0.8590
Epoch 51/150
219/219 [==============================] - 0s 880us/step - loss: 0.3417 - accuracy: 0.8560 - val_loss: 0.3370 - val_accuracy: 0.8593
Epoch 52/150
219/219 [==============================] - 0s 931us/step - loss: 0.3404 - accuracy: 0.8606 - val_loss: 0.3348 - val_accuracy: 0.8627
Epoch 53/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3457 - accuracy: 0.8566 - val_loss: 0.3361 - val_accuracy: 0.8617
Epoch 54/150
219/219 [==============================] - 0s 888us/step - loss: 0.3441 - accuracy: 0.8549 - val_loss: 0.3365 - val_accuracy: 0.8657
Epoch 55/150
219/219 [==============================] - 0s 919us/step - loss: 0.3385 - accuracy: 0.8573 - val_loss: 0.3368 - val_accuracy: 0.8600
Epoch 56/150
219/219 [==============================] - 0s 932us/step - loss: 0.3414 - accuracy: 0.8566 - val_loss: 0.3368 - val_accuracy: 0.8593
Epoch 57/150
219/219 [==============================] - 0s 879us/step - loss: 0.3436 - accuracy: 0.8553 - val_loss: 0.3361 - val_accuracy: 0.8637
Epoch 58/150
219/219 [==============================] - 0s 911us/step - loss: 0.3416 - accuracy: 0.8583 - val_loss: 0.3360 - val_accuracy: 0.8603
Epoch 59/150
219/219 [==============================] - 0s 841us/step - loss: 0.3393 - accuracy: 0.8591 - val_loss: 0.3362 - val_accuracy: 0.8620
Epoch 60/150
219/219 [==============================] - 0s 878us/step - loss: 0.3408 - accuracy: 0.8561 - val_loss: 0.3361 - val_accuracy: 0.8600
Epoch 61/150
219/219 [==============================] - 0s 917us/step - loss: 0.3416 - accuracy: 0.8567 - val_loss: 0.3366 - val_accuracy: 0.8703
Epoch 62/150
219/219 [==============================] - 0s 839us/step - loss: 0.3395 - accuracy: 0.8601 - val_loss: 0.3349 - val_accuracy: 0.8603
Epoch 63/150
219/219 [==============================] - 0s 938us/step - loss: 0.3413 - accuracy: 0.8596 - val_loss: 0.3354 - val_accuracy: 0.8617
Epoch 64/150
219/219 [==============================] - 0s 907us/step - loss: 0.3406 - accuracy: 0.8550 - val_loss: 0.3355 - val_accuracy: 0.8613
Epoch 65/150
219/219 [==============================] - 0s 889us/step - loss: 0.3383 - accuracy: 0.8600 - val_loss: 0.3361 - val_accuracy: 0.8640
Epoch 66/150
219/219 [==============================] - 0s 883us/step - loss: 0.3418 - accuracy: 0.8559 - val_loss: 0.3367 - val_accuracy: 0.8637
Epoch 67/150
219/219 [==============================] - 0s 856us/step - loss: 0.3372 - accuracy: 0.8601 - val_loss: 0.3364 - val_accuracy: 0.8663
Epoch 68/150
219/219 [==============================] - 0s 960us/step - loss: 0.3381 - accuracy: 0.8603 - val_loss: 0.3365 - val_accuracy: 0.8653
Epoch 69/150
219/219 [==============================] - 0s 909us/step - loss: 0.3403 - accuracy: 0.8590 - val_loss: 0.3365 - val_accuracy: 0.8647
Epoch 70/150
219/219 [==============================] - 0s 887us/step - loss: 0.3389 - accuracy: 0.8577 - val_loss: 0.3393 - val_accuracy: 0.8590
Epoch 71/150
219/219 [==============================] - 0s 893us/step - loss: 0.3432 - accuracy: 0.8566 - val_loss: 0.3359 - val_accuracy: 0.8637
Epoch 72/150
219/219 [==============================] - 0s 881us/step - loss: 0.3402 - accuracy: 0.8591 - val_loss: 0.3359 - val_accuracy: 0.8637
Epoch 73/150
219/219 [==============================] - 0s 915us/step - loss: 0.3360 - accuracy: 0.8596 - val_loss: 0.3365 - val_accuracy: 0.8603
Epoch 74/150
219/219 [==============================] - 0s 897us/step - loss: 0.3379 - accuracy: 0.8596 - val_loss: 0.3374 - val_accuracy: 0.8617
Epoch 75/150
219/219 [==============================] - 0s 937us/step - loss: 0.3404 - accuracy: 0.8561 - val_loss: 0.3364 - val_accuracy: 0.8640
Epoch 76/150
219/219 [==============================] - 0s 887us/step - loss: 0.3382 - accuracy: 0.8616 - val_loss: 0.3358 - val_accuracy: 0.8633
Epoch 77/150
219/219 [==============================] - 0s 849us/step - loss: 0.3376 - accuracy: 0.8591 - val_loss: 0.3367 - val_accuracy: 0.8597
Epoch 78/150
219/219 [==============================] - 0s 966us/step - loss: 0.3412 - accuracy: 0.8563 - val_loss: 0.3365 - val_accuracy: 0.8647
Epoch 79/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3384 - accuracy: 0.8596 - val_loss: 0.3361 - val_accuracy: 0.8620
Epoch 80/150
219/219 [==============================] - 0s 856us/step - loss: 0.3341 - accuracy: 0.8623 - val_loss: 0.3377 - val_accuracy: 0.8647
Epoch 81/150
219/219 [==============================] - 0s 899us/step - loss: 0.3358 - accuracy: 0.8626 - val_loss: 0.3373 - val_accuracy: 0.8627
Epoch 82/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3365 - accuracy: 0.8604 - val_loss: 0.3383 - val_accuracy: 0.8647
Epoch 83/150
219/219 [==============================] - 0s 883us/step - loss: 0.3385 - accuracy: 0.8590 - val_loss: 0.3369 - val_accuracy: 0.8583
Epoch 84/150
219/219 [==============================] - 0s 821us/step - loss: 0.3378 - accuracy: 0.8583 - val_loss: 0.3375 - val_accuracy: 0.8643
Epoch 85/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3370 - accuracy: 0.8570 - val_loss: 0.3383 - val_accuracy: 0.8587
Epoch 86/150
219/219 [==============================] - 0s 905us/step - loss: 0.3359 - accuracy: 0.8590 - val_loss: 0.3385 - val_accuracy: 0.8637
Epoch 87/150
219/219 [==============================] - 0s 838us/step - loss: 0.3355 - accuracy: 0.8607 - val_loss: 0.3380 - val_accuracy: 0.8567
Epoch 88/150
219/219 [==============================] - 0s 947us/step - loss: 0.3371 - accuracy: 0.8606 - val_loss: 0.3371 - val_accuracy: 0.8623
Epoch 89/150
219/219 [==============================] - 0s 824us/step - loss: 0.3325 - accuracy: 0.8629 - val_loss: 0.3381 - val_accuracy: 0.8623
Epoch 90/150
219/219 [==============================] - 0s 838us/step - loss: 0.3369 - accuracy: 0.8584 - val_loss: 0.3390 - val_accuracy: 0.8637
Epoch 91/150
219/219 [==============================] - 0s 935us/step - loss: 0.3327 - accuracy: 0.8640 - val_loss: 0.3379 - val_accuracy: 0.8610
Epoch 92/150
219/219 [==============================] - 0s 802us/step - loss: 0.3356 - accuracy: 0.8616 - val_loss: 0.3381 - val_accuracy: 0.8580
Epoch 93/150
219/219 [==============================] - 0s 965us/step - loss: 0.3353 - accuracy: 0.8600 - val_loss: 0.3393 - val_accuracy: 0.8613
Epoch 94/150
219/219 [==============================] - 0s 893us/step - loss: 0.3360 - accuracy: 0.8613 - val_loss: 0.3392 - val_accuracy: 0.8597
Epoch 95/150
219/219 [==============================] - 0s 957us/step - loss: 0.3401 - accuracy: 0.8601 - val_loss: 0.3391 - val_accuracy: 0.8643
Epoch 96/150
219/219 [==============================] - 0s 881us/step - loss: 0.3312 - accuracy: 0.8604 - val_loss: 0.3396 - val_accuracy: 0.8637
Epoch 97/150
219/219 [==============================] - 0s 855us/step - loss: 0.3329 - accuracy: 0.8591 - val_loss: 0.3378 - val_accuracy: 0.8627
Epoch 98/150
219/219 [==============================] - 0s 897us/step - loss: 0.3348 - accuracy: 0.8606 - val_loss: 0.3397 - val_accuracy: 0.8647
Epoch 99/150
219/219 [==============================] - 0s 898us/step - loss: 0.3342 - accuracy: 0.8620 - val_loss: 0.3390 - val_accuracy: 0.8623
Epoch 100/150
219/219 [==============================] - 0s 888us/step - loss: 0.3370 - accuracy: 0.8576 - val_loss: 0.3393 - val_accuracy: 0.8620
Epoch 101/150
219/219 [==============================] - 0s 955us/step - loss: 0.3347 - accuracy: 0.8624 - val_loss: 0.3374 - val_accuracy: 0.8637
Epoch 102/150
219/219 [==============================] - 0s 915us/step - loss: 0.3334 - accuracy: 0.8627 - val_loss: 0.3407 - val_accuracy: 0.8567
Epoch 103/150
219/219 [==============================] - 0s 947us/step - loss: 0.3319 - accuracy: 0.8626 - val_loss: 0.3393 - val_accuracy: 0.8627
Epoch 104/150
219/219 [==============================] - 0s 871us/step - loss: 0.3360 - accuracy: 0.8581 - val_loss: 0.3398 - val_accuracy: 0.8570
Epoch 105/150
219/219 [==============================] - 0s 883us/step - loss: 0.3337 - accuracy: 0.8639 - val_loss: 0.3395 - val_accuracy: 0.8603
Epoch 106/150
219/219 [==============================] - 0s 911us/step - loss: 0.3357 - accuracy: 0.8604 - val_loss: 0.3378 - val_accuracy: 0.8560
Epoch 107/150
219/219 [==============================] - 0s 833us/step - loss: 0.3370 - accuracy: 0.8591 - val_loss: 0.3381 - val_accuracy: 0.8597
Epoch 108/150
219/219 [==============================] - 0s 954us/step - loss: 0.3336 - accuracy: 0.8610 - val_loss: 0.3387 - val_accuracy: 0.8600
Epoch 109/150
219/219 [==============================] - 0s 826us/step - loss: 0.3325 - accuracy: 0.8640 - val_loss: 0.3418 - val_accuracy: 0.8647
Epoch 110/150
219/219 [==============================] - 0s 955us/step - loss: 0.3314 - accuracy: 0.8617 - val_loss: 0.3409 - val_accuracy: 0.8650
Epoch 111/150
219/219 [==============================] - 0s 879us/step - loss: 0.3362 - accuracy: 0.8611 - val_loss: 0.3396 - val_accuracy: 0.8577
Epoch 112/150
219/219 [==============================] - 0s 956us/step - loss: 0.3332 - accuracy: 0.8611 - val_loss: 0.3398 - val_accuracy: 0.8610
Epoch 113/150
219/219 [==============================] - 0s 892us/step - loss: 0.3324 - accuracy: 0.8624 - val_loss: 0.3382 - val_accuracy: 0.8580
Epoch 114/150
219/219 [==============================] - 0s 848us/step - loss: 0.3322 - accuracy: 0.8629 - val_loss: 0.3403 - val_accuracy: 0.8587
Epoch 115/150
219/219 [==============================] - 0s 907us/step - loss: 0.3335 - accuracy: 0.8601 - val_loss: 0.3410 - val_accuracy: 0.8597
Epoch 116/150
219/219 [==============================] - 0s 897us/step - loss: 0.3330 - accuracy: 0.8620 - val_loss: 0.3401 - val_accuracy: 0.8597
Epoch 117/150
219/219 [==============================] - 0s 879us/step - loss: 0.3356 - accuracy: 0.8601 - val_loss: 0.3397 - val_accuracy: 0.8573
Epoch 118/150
219/219 [==============================] - 0s 963us/step - loss: 0.3352 - accuracy: 0.8601 - val_loss: 0.3377 - val_accuracy: 0.8570
Epoch 119/150
219/219 [==============================] - 0s 878us/step - loss: 0.3334 - accuracy: 0.8623 - val_loss: 0.3414 - val_accuracy: 0.8657
Epoch 120/150
219/219 [==============================] - 0s 894us/step - loss: 0.3328 - accuracy: 0.8620 - val_loss: 0.3384 - val_accuracy: 0.8660
Epoch 121/150
219/219 [==============================] - 0s 914us/step - loss: 0.3329 - accuracy: 0.8623 - val_loss: 0.3375 - val_accuracy: 0.8600
Epoch 122/150
219/219 [==============================] - 0s 860us/step - loss: 0.3350 - accuracy: 0.8623 - val_loss: 0.3382 - val_accuracy: 0.8623
Epoch 123/150
219/219 [==============================] - 0s 895us/step - loss: 0.3341 - accuracy: 0.8611 - val_loss: 0.3375 - val_accuracy: 0.8627
Epoch 124/150
219/219 [==============================] - 0s 902us/step - loss: 0.3312 - accuracy: 0.8629 - val_loss: 0.3384 - val_accuracy: 0.8583
Epoch 125/150
219/219 [==============================] - 0s 893us/step - loss: 0.3333 - accuracy: 0.8594 - val_loss: 0.3391 - val_accuracy: 0.8623
Epoch 126/150
219/219 [==============================] - 0s 874us/step - loss: 0.3330 - accuracy: 0.8600 - val_loss: 0.3392 - val_accuracy: 0.8590
Epoch 127/150
219/219 [==============================] - 0s 888us/step - loss: 0.3292 - accuracy: 0.8659 - val_loss: 0.3381 - val_accuracy: 0.8590
Epoch 128/150
219/219 [==============================] - 0s 882us/step - loss: 0.3319 - accuracy: 0.8604 - val_loss: 0.3386 - val_accuracy: 0.8650
Epoch 129/150
219/219 [==============================] - 0s 938us/step - loss: 0.3337 - accuracy: 0.8626 - val_loss: 0.3392 - val_accuracy: 0.8600
Epoch 130/150
219/219 [==============================] - 0s 897us/step - loss: 0.3324 - accuracy: 0.8649 - val_loss: 0.3379 - val_accuracy: 0.8587
Epoch 131/150
219/219 [==============================] - 0s 851us/step - loss: 0.3340 - accuracy: 0.8603 - val_loss: 0.3373 - val_accuracy: 0.8590
Epoch 132/150
219/219 [==============================] - 0s 952us/step - loss: 0.3375 - accuracy: 0.8601 - val_loss: 0.3375 - val_accuracy: 0.8600
Epoch 133/150
219/219 [==============================] - 0s 856us/step - loss: 0.3295 - accuracy: 0.8656 - val_loss: 0.3376 - val_accuracy: 0.8593
Epoch 134/150
219/219 [==============================] - 0s 878us/step - loss: 0.3339 - accuracy: 0.8611 - val_loss: 0.3392 - val_accuracy: 0.8570
Epoch 135/150
219/219 [==============================] - 0s 887us/step - loss: 0.3289 - accuracy: 0.8663 - val_loss: 0.3380 - val_accuracy: 0.8587
Epoch 136/150
219/219 [==============================] - 0s 901us/step - loss: 0.3319 - accuracy: 0.8644 - val_loss: 0.3376 - val_accuracy: 0.8587
Epoch 137/150
219/219 [==============================] - 0s 892us/step - loss: 0.3325 - accuracy: 0.8623 - val_loss: 0.3376 - val_accuracy: 0.8617
Epoch 138/150
219/219 [==============================] - 0s 902us/step - loss: 0.3365 - accuracy: 0.8593 - val_loss: 0.3404 - val_accuracy: 0.8593
Epoch 139/150
219/219 [==============================] - 0s 865us/step - loss: 0.3343 - accuracy: 0.8607 - val_loss: 0.3382 - val_accuracy: 0.8590
Epoch 140/150
219/219 [==============================] - 0s 950us/step - loss: 0.3278 - accuracy: 0.8614 - val_loss: 0.3382 - val_accuracy: 0.8610
Epoch 141/150
219/219 [==============================] - 0s 862us/step - loss: 0.3288 - accuracy: 0.8653 - val_loss: 0.3400 - val_accuracy: 0.8587
Epoch 142/150
219/219 [==============================] - 0s 874us/step - loss: 0.3366 - accuracy: 0.8607 - val_loss: 0.3390 - val_accuracy: 0.8630
Epoch 143/150
219/219 [==============================] - 0s 945us/step - loss: 0.3336 - accuracy: 0.8613 - val_loss: 0.3379 - val_accuracy: 0.8610
Epoch 144/150
219/219 [==============================] - 0s 847us/step - loss: 0.3338 - accuracy: 0.8610 - val_loss: 0.3377 - val_accuracy: 0.8587
Epoch 145/150
219/219 [==============================] - 0s 891us/step - loss: 0.3316 - accuracy: 0.8621 - val_loss: 0.3376 - val_accuracy: 0.8607
Epoch 146/150
219/219 [==============================] - 0s 896us/step - loss: 0.3328 - accuracy: 0.8597 - val_loss: 0.3388 - val_accuracy: 0.8637
Epoch 147/150
219/219 [==============================] - 0s 911us/step - loss: 0.3314 - accuracy: 0.8631 - val_loss: 0.3375 - val_accuracy: 0.8583
Epoch 148/150
219/219 [==============================] - 0s 961us/step - loss: 0.3297 - accuracy: 0.8634 - val_loss: 0.3381 - val_accuracy: 0.8617
Epoch 149/150
219/219 [==============================] - 0s 856us/step - loss: 0.3326 - accuracy: 0.8621 - val_loss: 0.3376 - val_accuracy: 0.8573
Epoch 150/150
219/219 [==============================] - 0s 861us/step - loss: 0.3353 - accuracy: 0.8607 - val_loss: 0.3375 - val_accuracy: 0.8617
Out[51]:
<tensorflow.python.keras.callbacks.History at 0x1dec6cf80c8>

7.A Predict the results using 0.5 as a threshold

In [52]:
model.predict(X_test_array)[:5] # Observe first 5 probabilities
Out[52]:
array([[0.6467774 ],
       [0.02072319],
       [0.01633334],
       [0.7306629 ],
       [0.01818645]], dtype=float32)
In [53]:
th=0.5 # Threshold
y_test_preds = np.where(model.predict(X_test_array) > th, 1, 0)
In [54]:
y_test_preds[:5] # Observe First 5 predictions
Out[54]:
array([[1],
       [0],
       [0],
       [1],
       [0]])

8.A Confusion Matrix, Accuracy, Recall, Precision, F1_Score

In [55]:
# Confusion matrix with optimal Threshold on test set
metrics.confusion_matrix(y_test, y_test_preds)
Out[55]:
array([[2303,   92],
       [ 323,  282]], dtype=int64)
In [56]:
keras_predictions = model.predict_classes(X_test_array, batch_size=200, verbose=1)
WARNING:tensorflow:From <ipython-input-56-e62fb794ef20>:1: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01.
Instructions for updating:
Please use instead:
* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).
* `(model.predict(x) > 0.5).astype("int32")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation).
15/15 [==============================] - 0s 665us/step
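
The warning above already points to the forward-compatible replacement. For this binary, sigmoid-output model, an equivalent call without the deprecated method would be the following sketch (the variable name is ours, for illustration only):
# Equivalent to predict_classes for a sigmoid output, without the deprecated call
keras_predictions_v2 = (model.predict(X_test_array, batch_size=200) > 0.5).astype("int32")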
In [57]:
cm = confusion_matrix(y_test, keras_predictions)
actual_cm = confusion_matrix(y_test, y_test_preds)
labels = ['No Exited', 'Exited']

fig = plt.figure(figsize=(16,8))

fig.add_subplot(221)
plt.title("Confusion Matrix \n keras")
sns.heatmap(cm,annot=True,cmap="Blues",fmt="d",cbar=False)

plt.show()
In [58]:
print('Test Metrics at 0.5 Threshold with basic DNN model\n')
Test_Metrics_Basic_DNN=pd.DataFrame(data=[accuracy_score(y_test, y_test_preds), 
                   recall_score(y_test, y_test_preds), 
                   precision_score(y_test, y_test_preds),
                   f1_score(y_test, y_test_preds)], columns=['Basic DNN'],
             index=["accuracy", "recall", "precision", "f1_score"])
print(Test_Metrics_Basic_DNN)
Test Metrics at 0.5 Threshold with basic DNN model

           Basic DNN
accuracy    0.861667
recall      0.466116
precision   0.754011
f1_score    0.576098

IDENTIFY POINTS OF IMPROVEMENT - MODEL TUNING AND OPTIMIZATION

Parameters Tuning and Strategies

I want to focus on how to approach some characteristic elements of NNs whose initialization, optimization and tuning can make the network considerably more powerful and accurate.

Parameters: these are the coefficients of the model, and they are chosen by the model itself. While learning, the algorithm optimizes these coefficients, e.g. the weights, according to a given optimization strategy and returns an array of parameters which minimize the error. The only thing we have to do with these parameters is initialize them.
Hence, there are some guidelines for properly initializing the parameters depending on the activation function we employ. As we will use ReLU, we will use He initialization with a normal distribution.
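
For reference, He (normal) initialization is set per layer through the kernel_initializer argument. A minimal sketch, assuming tensorflow has been imported as tf as above (this is not part of the models trained below, which keep the default initializer):
# A ReLU layer with He-normal weight initialization (illustrative only)
tf.keras.layers.Dense(13, activation='relu', kernel_initializer='he_normal')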

Hyperparameters: these are elements that, unlike the previous ones, we need to set ourselves. The model will not update them according to the optimization strategy: manual intervention is always needed.
So for this specific task we will try to improve our model through:
Number of hidden layers: we need to test our model with more layers in order to see if the accuracy increases.
Activation function: the function through which we pass the weighted sum, in order to obtain a meaningful output, namely a vector of probabilities or a 0-1 output. Here we use ReLU in the hidden layers and sigmoid in the output layer.
BatchNormalization layer
Layer that normalizes its inputs. Batch normalization applies a transformation that maintains the mean output close to 0 and the output standard deviation close to 1. Importantly, batch normalization works differently during training and during inference.
LayerNormalization layer
Normalizes the activations of the previous layer for each example in a batch independently, rather than across the batch as Batch Normalization does, i.e. it applies a transformation that keeps the mean activation within each example close to 0 and the activation standard deviation close to 1. Given an input tensor, moments are calculated and normalization is performed across the specified axes.
Improving the model using Dropout regularization
When a model is trained too much on the training set, its performance on the test set degrades; this is called overfitting. Dropout regularization is a technique used to reduce overfitting. To avoid overfitting, at each iteration of training we can add a dropout layer after each existing layer in our neural network. Let's see what a neural network with dropout layers could look like.
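
Putting these ideas together, here is a hedged sketch of what such an architecture could look like with LayerNormalization, He initialization and Dropout (illustrative only; the model actually trained below uses BatchNormalization, default initialization and no dropout):
# Sketch only: deeper model with layer normalization, He initialization and dropout
# (assumes `import tensorflow as tf` as earlier in the notebook)
model_sketch = tf.keras.models.Sequential()
model_sketch.add(tf.keras.Input(shape=(13,)))               # 13 features, as above
model_sketch.add(tf.keras.layers.LayerNormalization())      # normalize each example independently
model_sketch.add(tf.keras.layers.Dense(13, activation='relu', kernel_initializer='he_normal'))
model_sketch.add(tf.keras.layers.Dropout(0.2))              # drop 20% of units during training
model_sketch.add(tf.keras.layers.Dense(10, activation='relu', kernel_initializer='he_normal'))
model_sketch.add(tf.keras.layers.Dropout(0.2))
model_sketch.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model_sketch.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])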

APPROACH With Three (3) Dense Layers

In [59]:
# Initialize Sequential model
model = tf.keras.models.Sequential()


# Add Input layer to the model
model.add(tf.keras.Input(shape=(13,))) # 13 Features

# Batch Normalization Layer
model.add(tf.keras.layers.BatchNormalization())

# Hidden layers
model.add(tf.keras.layers.Dense(13, activation='relu', name='Layer_1'))
model.add(tf.keras.layers.Dense(13, activation='relu', name='Layer_2'))
model.add(tf.keras.layers.Dense(10, activation='relu', name='Layer_3'))
#Output layer
model.add(tf.keras.layers.Dense(1, activation='sigmoid', name='Output'))
In [60]:
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
In [61]:
model.fit(X_train_array, y_train_array, validation_data=(X_test_array, y_test_array), epochs=150,
          batch_size = 32)
Epoch 1/150
219/219 [==============================] - 0s 1ms/step - loss: 0.5108 - accuracy: 0.7873 - val_loss: 0.4590 - val_accuracy: 0.7983
Epoch 2/150
219/219 [==============================] - 0s 920us/step - loss: 0.4474 - accuracy: 0.7954 - val_loss: 0.4351 - val_accuracy: 0.7983
Epoch 3/150
219/219 [==============================] - 0s 981us/step - loss: 0.4363 - accuracy: 0.7954 - val_loss: 0.4273 - val_accuracy: 0.7983
Epoch 4/150
219/219 [==============================] - 0s 924us/step - loss: 0.4280 - accuracy: 0.7999 - val_loss: 0.4190 - val_accuracy: 0.8093
Epoch 5/150
219/219 [==============================] - 0s 966us/step - loss: 0.4193 - accuracy: 0.8089 - val_loss: 0.4082 - val_accuracy: 0.8247
Epoch 6/150
219/219 [==============================] - 0s 945us/step - loss: 0.4084 - accuracy: 0.8219 - val_loss: 0.3906 - val_accuracy: 0.8330
Epoch 7/150
219/219 [==============================] - 0s 942us/step - loss: 0.3910 - accuracy: 0.8341 - val_loss: 0.3779 - val_accuracy: 0.8457
Epoch 8/150
219/219 [==============================] - 0s 938us/step - loss: 0.3836 - accuracy: 0.8396 - val_loss: 0.3707 - val_accuracy: 0.8507
Epoch 9/150
219/219 [==============================] - 0s 942us/step - loss: 0.3750 - accuracy: 0.8483 - val_loss: 0.3666 - val_accuracy: 0.8513
Epoch 10/150
219/219 [==============================] - 0s 975us/step - loss: 0.3739 - accuracy: 0.8460 - val_loss: 0.3641 - val_accuracy: 0.8550
Epoch 11/150
219/219 [==============================] - 0s 934us/step - loss: 0.3707 - accuracy: 0.8504 - val_loss: 0.3605 - val_accuracy: 0.8583
Epoch 12/150
219/219 [==============================] - 0s 911us/step - loss: 0.3706 - accuracy: 0.8496 - val_loss: 0.3592 - val_accuracy: 0.8580
Epoch 13/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3646 - accuracy: 0.8537 - val_loss: 0.3529 - val_accuracy: 0.8583
Epoch 14/150
219/219 [==============================] - 0s 987us/step - loss: 0.3594 - accuracy: 0.8511 - val_loss: 0.3487 - val_accuracy: 0.8593
Epoch 15/150
219/219 [==============================] - 0s 966us/step - loss: 0.3553 - accuracy: 0.8503 - val_loss: 0.3470 - val_accuracy: 0.8603
Epoch 16/150
219/219 [==============================] - 0s 979us/step - loss: 0.3518 - accuracy: 0.8570 - val_loss: 0.3464 - val_accuracy: 0.8557
Epoch 17/150
219/219 [==============================] - 0s 962us/step - loss: 0.3518 - accuracy: 0.8527 - val_loss: 0.3464 - val_accuracy: 0.8597
Epoch 18/150
219/219 [==============================] - 0s 958us/step - loss: 0.3499 - accuracy: 0.8556 - val_loss: 0.3445 - val_accuracy: 0.8560
Epoch 19/150
219/219 [==============================] - 0s 907us/step - loss: 0.3499 - accuracy: 0.8533 - val_loss: 0.3447 - val_accuracy: 0.8637
Epoch 20/150
219/219 [==============================] - 0s 984us/step - loss: 0.3519 - accuracy: 0.8541 - val_loss: 0.3451 - val_accuracy: 0.8603
Epoch 21/150
219/219 [==============================] - 0s 937us/step - loss: 0.3510 - accuracy: 0.8537 - val_loss: 0.3442 - val_accuracy: 0.8623
Epoch 22/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3500 - accuracy: 0.8543 - val_loss: 0.3427 - val_accuracy: 0.8613
Epoch 23/150
219/219 [==============================] - 0s 938us/step - loss: 0.3494 - accuracy: 0.8563 - val_loss: 0.3432 - val_accuracy: 0.8637
Epoch 24/150
219/219 [==============================] - 0s 964us/step - loss: 0.3469 - accuracy: 0.8594 - val_loss: 0.3413 - val_accuracy: 0.8630
Epoch 25/150
219/219 [==============================] - 0s 966us/step - loss: 0.3501 - accuracy: 0.8533 - val_loss: 0.3417 - val_accuracy: 0.8630
Epoch 26/150
219/219 [==============================] - 0s 902us/step - loss: 0.3453 - accuracy: 0.8577 - val_loss: 0.3419 - val_accuracy: 0.8577
Epoch 27/150
219/219 [==============================] - 0s 996us/step - loss: 0.3482 - accuracy: 0.8544 - val_loss: 0.3417 - val_accuracy: 0.8617
Epoch 28/150
219/219 [==============================] - 0s 935us/step - loss: 0.3456 - accuracy: 0.8599 - val_loss: 0.3411 - val_accuracy: 0.8623
Epoch 29/150
219/219 [==============================] - 0s 955us/step - loss: 0.3445 - accuracy: 0.8607 - val_loss: 0.3411 - val_accuracy: 0.8597
Epoch 30/150
219/219 [==============================] - 0s 947us/step - loss: 0.3468 - accuracy: 0.8566 - val_loss: 0.3416 - val_accuracy: 0.8623
Epoch 31/150
219/219 [==============================] - 0s 871us/step - loss: 0.3481 - accuracy: 0.8523 - val_loss: 0.3412 - val_accuracy: 0.8633
Epoch 32/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3468 - accuracy: 0.8554 - val_loss: 0.3416 - val_accuracy: 0.8630
Epoch 33/150
219/219 [==============================] - 0s 961us/step - loss: 0.3454 - accuracy: 0.8566 - val_loss: 0.3415 - val_accuracy: 0.8643
Epoch 34/150
219/219 [==============================] - 0s 947us/step - loss: 0.3476 - accuracy: 0.8561 - val_loss: 0.3423 - val_accuracy: 0.8633
Epoch 35/150
219/219 [==============================] - 0s 945us/step - loss: 0.3412 - accuracy: 0.8596 - val_loss: 0.3399 - val_accuracy: 0.8613
Epoch 36/150
219/219 [==============================] - 0s 923us/step - loss: 0.3416 - accuracy: 0.8587 - val_loss: 0.3404 - val_accuracy: 0.8603
Epoch 37/150
219/219 [==============================] - 0s 941us/step - loss: 0.3405 - accuracy: 0.8609 - val_loss: 0.3411 - val_accuracy: 0.8630
Epoch 38/150
219/219 [==============================] - 0s 837us/step - loss: 0.3462 - accuracy: 0.8549 - val_loss: 0.3419 - val_accuracy: 0.8647
Epoch 39/150
219/219 [==============================] - 0s 978us/step - loss: 0.3449 - accuracy: 0.8566 - val_loss: 0.3429 - val_accuracy: 0.8583
Epoch 40/150
219/219 [==============================] - 0s 867us/step - loss: 0.3462 - accuracy: 0.8551 - val_loss: 0.3422 - val_accuracy: 0.8640
Epoch 41/150
219/219 [==============================] - 0s 892us/step - loss: 0.3464 - accuracy: 0.8534 - val_loss: 0.3427 - val_accuracy: 0.8633
Epoch 42/150
219/219 [==============================] - 0s 967us/step - loss: 0.3453 - accuracy: 0.8587 - val_loss: 0.3453 - val_accuracy: 0.8583
Epoch 43/150
219/219 [==============================] - 0s 970us/step - loss: 0.3403 - accuracy: 0.8591 - val_loss: 0.3401 - val_accuracy: 0.8630
Epoch 44/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3448 - accuracy: 0.8551 - val_loss: 0.3416 - val_accuracy: 0.8640
Epoch 45/150
219/219 [==============================] - 0s 899us/step - loss: 0.3401 - accuracy: 0.8573 - val_loss: 0.3418 - val_accuracy: 0.8603
Epoch 46/150
219/219 [==============================] - 0s 998us/step - loss: 0.3424 - accuracy: 0.8576 - val_loss: 0.3424 - val_accuracy: 0.8613
Epoch 47/150
219/219 [==============================] - 0s 923us/step - loss: 0.3458 - accuracy: 0.8567 - val_loss: 0.3414 - val_accuracy: 0.8623
Epoch 48/150
219/219 [==============================] - 0s 945us/step - loss: 0.3433 - accuracy: 0.8574 - val_loss: 0.3422 - val_accuracy: 0.8610
Epoch 49/150
219/219 [==============================] - 0s 943us/step - loss: 0.3397 - accuracy: 0.8590 - val_loss: 0.3397 - val_accuracy: 0.8623
Epoch 50/150
219/219 [==============================] - 0s 960us/step - loss: 0.3412 - accuracy: 0.8587 - val_loss: 0.3418 - val_accuracy: 0.8623
Epoch 51/150
219/219 [==============================] - 0s 950us/step - loss: 0.3392 - accuracy: 0.8556 - val_loss: 0.3424 - val_accuracy: 0.8593
Epoch 52/150
219/219 [==============================] - 0s 821us/step - loss: 0.3396 - accuracy: 0.8584 - val_loss: 0.3413 - val_accuracy: 0.8633
Epoch 53/150
219/219 [==============================] - 0s 971us/step - loss: 0.3403 - accuracy: 0.8577 - val_loss: 0.3401 - val_accuracy: 0.8603
Epoch 54/150
219/219 [==============================] - 0s 959us/step - loss: 0.3441 - accuracy: 0.8584 - val_loss: 0.3437 - val_accuracy: 0.8597
Epoch 55/150
219/219 [==============================] - 0s 869us/step - loss: 0.3394 - accuracy: 0.8583 - val_loss: 0.3410 - val_accuracy: 0.8633
Epoch 56/150
219/219 [==============================] - 0s 961us/step - loss: 0.3400 - accuracy: 0.8604 - val_loss: 0.3423 - val_accuracy: 0.8620
Epoch 57/150
219/219 [==============================] - 0s 865us/step - loss: 0.3398 - accuracy: 0.8576 - val_loss: 0.3421 - val_accuracy: 0.8610
Epoch 58/150
219/219 [==============================] - 0s 876us/step - loss: 0.3434 - accuracy: 0.8577 - val_loss: 0.3411 - val_accuracy: 0.8617
Epoch 59/150
219/219 [==============================] - 0s 917us/step - loss: 0.3395 - accuracy: 0.8584 - val_loss: 0.3408 - val_accuracy: 0.8627
Epoch 60/150
219/219 [==============================] - 0s 911us/step - loss: 0.3399 - accuracy: 0.8600 - val_loss: 0.3416 - val_accuracy: 0.8633
Epoch 61/150
219/219 [==============================] - 0s 893us/step - loss: 0.3406 - accuracy: 0.8576 - val_loss: 0.3426 - val_accuracy: 0.8637
Epoch 62/150
219/219 [==============================] - 0s 889us/step - loss: 0.3408 - accuracy: 0.8581 - val_loss: 0.3413 - val_accuracy: 0.8633
Epoch 63/150
219/219 [==============================] - 0s 970us/step - loss: 0.3361 - accuracy: 0.8603 - val_loss: 0.3428 - val_accuracy: 0.8643
Epoch 64/150
219/219 [==============================] - 0s 883us/step - loss: 0.3403 - accuracy: 0.8566 - val_loss: 0.3412 - val_accuracy: 0.8610
Epoch 65/150
219/219 [==============================] - 0s 947us/step - loss: 0.3400 - accuracy: 0.8576 - val_loss: 0.3407 - val_accuracy: 0.8593
Epoch 66/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3390 - accuracy: 0.8580 - val_loss: 0.3416 - val_accuracy: 0.8593
Epoch 67/150
219/219 [==============================] - 0s 947us/step - loss: 0.3399 - accuracy: 0.8600 - val_loss: 0.3436 - val_accuracy: 0.8597
Epoch 68/150
219/219 [==============================] - 0s 893us/step - loss: 0.3414 - accuracy: 0.8609 - val_loss: 0.3418 - val_accuracy: 0.8633
Epoch 69/150
219/219 [==============================] - 0s 929us/step - loss: 0.3436 - accuracy: 0.8559 - val_loss: 0.3424 - val_accuracy: 0.8587
Epoch 70/150
219/219 [==============================] - 0s 887us/step - loss: 0.3363 - accuracy: 0.8620 - val_loss: 0.3426 - val_accuracy: 0.8637
Epoch 71/150
219/219 [==============================] - 0s 948us/step - loss: 0.3401 - accuracy: 0.8589 - val_loss: 0.3424 - val_accuracy: 0.8617
Epoch 72/150
219/219 [==============================] - 0s 982us/step - loss: 0.3402 - accuracy: 0.8573 - val_loss: 0.3432 - val_accuracy: 0.8640
Epoch 73/150
219/219 [==============================] - 0s 858us/step - loss: 0.3400 - accuracy: 0.8570 - val_loss: 0.3423 - val_accuracy: 0.8603
Epoch 74/150
219/219 [==============================] - 0s 916us/step - loss: 0.3383 - accuracy: 0.8614 - val_loss: 0.3437 - val_accuracy: 0.8607
Epoch 75/150
219/219 [==============================] - 0s 944us/step - loss: 0.3367 - accuracy: 0.8600 - val_loss: 0.3501 - val_accuracy: 0.8640
Epoch 76/150
219/219 [==============================] - 0s 947us/step - loss: 0.3395 - accuracy: 0.8567 - val_loss: 0.3430 - val_accuracy: 0.8620
Epoch 77/150
219/219 [==============================] - 0s 901us/step - loss: 0.3406 - accuracy: 0.8580 - val_loss: 0.3416 - val_accuracy: 0.8603
Epoch 78/150
219/219 [==============================] - 0s 902us/step - loss: 0.3405 - accuracy: 0.8563 - val_loss: 0.3438 - val_accuracy: 0.8603
Epoch 79/150
219/219 [==============================] - 0s 930us/step - loss: 0.3339 - accuracy: 0.8609 - val_loss: 0.3466 - val_accuracy: 0.8580
Epoch 80/150
219/219 [==============================] - 0s 902us/step - loss: 0.3400 - accuracy: 0.8611 - val_loss: 0.3425 - val_accuracy: 0.8607
Epoch 81/150
219/219 [==============================] - 0s 943us/step - loss: 0.3367 - accuracy: 0.8620 - val_loss: 0.3421 - val_accuracy: 0.8593
Epoch 82/150
219/219 [==============================] - 0s 905us/step - loss: 0.3377 - accuracy: 0.8589 - val_loss: 0.3425 - val_accuracy: 0.8613
Epoch 83/150
219/219 [==============================] - 0s 885us/step - loss: 0.3408 - accuracy: 0.8541 - val_loss: 0.3460 - val_accuracy: 0.8570
Epoch 84/150
219/219 [==============================] - 0s 912us/step - loss: 0.3399 - accuracy: 0.8560 - val_loss: 0.3432 - val_accuracy: 0.8630
Epoch 85/150
219/219 [==============================] - 0s 971us/step - loss: 0.3367 - accuracy: 0.8577 - val_loss: 0.3428 - val_accuracy: 0.8613
Epoch 86/150
219/219 [==============================] - 0s 933us/step - loss: 0.3379 - accuracy: 0.8600 - val_loss: 0.3436 - val_accuracy: 0.8623
Epoch 87/150
219/219 [==============================] - 0s 923us/step - loss: 0.3378 - accuracy: 0.8609 - val_loss: 0.3450 - val_accuracy: 0.8627
Epoch 88/150
219/219 [==============================] - 0s 963us/step - loss: 0.3379 - accuracy: 0.8610 - val_loss: 0.3430 - val_accuracy: 0.8640
Epoch 89/150
219/219 [==============================] - 0s 902us/step - loss: 0.3374 - accuracy: 0.8594 - val_loss: 0.3419 - val_accuracy: 0.8633
Epoch 90/150
219/219 [==============================] - 0s 910us/step - loss: 0.3370 - accuracy: 0.8603 - val_loss: 0.3416 - val_accuracy: 0.8607
Epoch 91/150
219/219 [==============================] - 0s 934us/step - loss: 0.3392 - accuracy: 0.8554 - val_loss: 0.3431 - val_accuracy: 0.8593
Epoch 92/150
219/219 [==============================] - 0s 891us/step - loss: 0.3398 - accuracy: 0.8579 - val_loss: 0.3415 - val_accuracy: 0.8607
Epoch 93/150
219/219 [==============================] - 0s 968us/step - loss: 0.3359 - accuracy: 0.8616 - val_loss: 0.3446 - val_accuracy: 0.8630
Epoch 94/150
219/219 [==============================] - 0s 954us/step - loss: 0.3364 - accuracy: 0.8581 - val_loss: 0.3423 - val_accuracy: 0.8627
Epoch 95/150
219/219 [==============================] - 0s 968us/step - loss: 0.3364 - accuracy: 0.8597 - val_loss: 0.3421 - val_accuracy: 0.8643
Epoch 96/150
219/219 [==============================] - 0s 865us/step - loss: 0.3382 - accuracy: 0.8596 - val_loss: 0.3433 - val_accuracy: 0.8627
Epoch 97/150
219/219 [==============================] - 0s 950us/step - loss: 0.3385 - accuracy: 0.8591 - val_loss: 0.3432 - val_accuracy: 0.8630
Epoch 98/150
219/219 [==============================] - 0s 842us/step - loss: 0.3364 - accuracy: 0.8597 - val_loss: 0.3431 - val_accuracy: 0.8630
Epoch 99/150
219/219 [==============================] - 0s 893us/step - loss: 0.3395 - accuracy: 0.8604 - val_loss: 0.3427 - val_accuracy: 0.8620
Epoch 100/150
219/219 [==============================] - 0s 988us/step - loss: 0.3412 - accuracy: 0.8597 - val_loss: 0.3436 - val_accuracy: 0.8610
Epoch 101/150
219/219 [==============================] - 0s 937us/step - loss: 0.3367 - accuracy: 0.8597 - val_loss: 0.3446 - val_accuracy: 0.8610
Epoch 102/150
219/219 [==============================] - 0s 947us/step - loss: 0.3366 - accuracy: 0.8591 - val_loss: 0.3438 - val_accuracy: 0.8633
Epoch 103/150
219/219 [==============================] - 0s 871us/step - loss: 0.3385 - accuracy: 0.8593 - val_loss: 0.3432 - val_accuracy: 0.8603
Epoch 104/150
219/219 [==============================] - 0s 965us/step - loss: 0.3394 - accuracy: 0.8609 - val_loss: 0.3433 - val_accuracy: 0.8647
Epoch 105/150
219/219 [==============================] - 0s 888us/step - loss: 0.3396 - accuracy: 0.8580 - val_loss: 0.3424 - val_accuracy: 0.8637
Epoch 106/150
219/219 [==============================] - 0s 979us/step - loss: 0.3357 - accuracy: 0.8609 - val_loss: 0.3413 - val_accuracy: 0.8637
Epoch 107/150
219/219 [==============================] - 0s 906us/step - loss: 0.3358 - accuracy: 0.8600 - val_loss: 0.3418 - val_accuracy: 0.8620
Epoch 108/150
219/219 [==============================] - 0s 915us/step - loss: 0.3393 - accuracy: 0.8574 - val_loss: 0.3418 - val_accuracy: 0.8650
Epoch 109/150
219/219 [==============================] - 0s 991us/step - loss: 0.3366 - accuracy: 0.8607 - val_loss: 0.3426 - val_accuracy: 0.8613
Epoch 110/150
219/219 [==============================] - 0s 925us/step - loss: 0.3374 - accuracy: 0.8619 - val_loss: 0.3410 - val_accuracy: 0.8623
Epoch 111/150
219/219 [==============================] - 0s 911us/step - loss: 0.3381 - accuracy: 0.8621 - val_loss: 0.3404 - val_accuracy: 0.8633
Epoch 112/150
219/219 [==============================] - 0s 954us/step - loss: 0.3361 - accuracy: 0.8593 - val_loss: 0.3417 - val_accuracy: 0.8647
Epoch 113/150
219/219 [==============================] - 0s 991us/step - loss: 0.3390 - accuracy: 0.8590 - val_loss: 0.3421 - val_accuracy: 0.8623
Epoch 114/150
219/219 [==============================] - 0s 979us/step - loss: 0.3379 - accuracy: 0.8607 - val_loss: 0.3426 - val_accuracy: 0.8623
Epoch 115/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3382 - accuracy: 0.8594 - val_loss: 0.3442 - val_accuracy: 0.8600
Epoch 116/150
219/219 [==============================] - 0s 988us/step - loss: 0.3371 - accuracy: 0.8604 - val_loss: 0.3421 - val_accuracy: 0.8627
Epoch 117/150
219/219 [==============================] - 0s 930us/step - loss: 0.3365 - accuracy: 0.8591 - val_loss: 0.3406 - val_accuracy: 0.8637
Epoch 118/150
219/219 [==============================] - 0s 906us/step - loss: 0.3375 - accuracy: 0.8599 - val_loss: 0.3432 - val_accuracy: 0.8630
Epoch 119/150
219/219 [==============================] - 0s 992us/step - loss: 0.3361 - accuracy: 0.8591 - val_loss: 0.3429 - val_accuracy: 0.8617
Epoch 120/150
219/219 [==============================] - 0s 902us/step - loss: 0.3352 - accuracy: 0.8603 - val_loss: 0.3453 - val_accuracy: 0.8643
Epoch 121/150
219/219 [==============================] - 0s 993us/step - loss: 0.3369 - accuracy: 0.8589 - val_loss: 0.3420 - val_accuracy: 0.8643
Epoch 122/150
219/219 [==============================] - 0s 924us/step - loss: 0.3399 - accuracy: 0.8594 - val_loss: 0.3430 - val_accuracy: 0.8630
Epoch 123/150
219/219 [==============================] - 0s 990us/step - loss: 0.3366 - accuracy: 0.8623 - val_loss: 0.3432 - val_accuracy: 0.8620
Epoch 124/150
219/219 [==============================] - 0s 979us/step - loss: 0.3366 - accuracy: 0.8599 - val_loss: 0.3438 - val_accuracy: 0.8633
Epoch 125/150
219/219 [==============================] - 0s 930us/step - loss: 0.3369 - accuracy: 0.8617 - val_loss: 0.3426 - val_accuracy: 0.8637
Epoch 126/150
219/219 [==============================] - 0s 931us/step - loss: 0.3344 - accuracy: 0.8619 - val_loss: 0.3443 - val_accuracy: 0.8620
Epoch 127/150
219/219 [==============================] - 0s 852us/step - loss: 0.3386 - accuracy: 0.8589 - val_loss: 0.3424 - val_accuracy: 0.8663
Epoch 128/150
219/219 [==============================] - 0s 991us/step - loss: 0.3350 - accuracy: 0.8611 - val_loss: 0.3451 - val_accuracy: 0.8607
Epoch 129/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3357 - accuracy: 0.8644 - val_loss: 0.3430 - val_accuracy: 0.8627
Epoch 130/150
219/219 [==============================] - 0s 995us/step - loss: 0.3338 - accuracy: 0.8620 - val_loss: 0.3423 - val_accuracy: 0.8617
Epoch 131/150
219/219 [==============================] - 0s 953us/step - loss: 0.3370 - accuracy: 0.8634 - val_loss: 0.3434 - val_accuracy: 0.8630
Epoch 132/150
219/219 [==============================] - 0s 861us/step - loss: 0.3346 - accuracy: 0.8596 - val_loss: 0.3453 - val_accuracy: 0.8617
Epoch 133/150
219/219 [==============================] - 0s 978us/step - loss: 0.3347 - accuracy: 0.8617 - val_loss: 0.3457 - val_accuracy: 0.8610
Epoch 134/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3336 - accuracy: 0.8630 - val_loss: 0.3436 - val_accuracy: 0.8623
Epoch 135/150
219/219 [==============================] - 0s 944us/step - loss: 0.3393 - accuracy: 0.8563 - val_loss: 0.3436 - val_accuracy: 0.8620
Epoch 136/150
219/219 [==============================] - 0s 874us/step - loss: 0.3381 - accuracy: 0.8566 - val_loss: 0.3465 - val_accuracy: 0.8610
Epoch 137/150
219/219 [==============================] - 0s 833us/step - loss: 0.3369 - accuracy: 0.8583 - val_loss: 0.3441 - val_accuracy: 0.8643
Epoch 138/150
219/219 [==============================] - 0s 975us/step - loss: 0.3363 - accuracy: 0.8633 - val_loss: 0.3453 - val_accuracy: 0.8650
Epoch 139/150
219/219 [==============================] - 0s 992us/step - loss: 0.3394 - accuracy: 0.8563 - val_loss: 0.3484 - val_accuracy: 0.8597
Epoch 140/150
219/219 [==============================] - 0s 924us/step - loss: 0.3378 - accuracy: 0.8613 - val_loss: 0.3433 - val_accuracy: 0.8630
Epoch 141/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3369 - accuracy: 0.8601 - val_loss: 0.3433 - val_accuracy: 0.8647
Epoch 142/150
219/219 [==============================] - 0s 999us/step - loss: 0.3396 - accuracy: 0.8571 - val_loss: 0.3435 - val_accuracy: 0.8617
Epoch 143/150
219/219 [==============================] - 0s 989us/step - loss: 0.3348 - accuracy: 0.8616 - val_loss: 0.3428 - val_accuracy: 0.8610
Epoch 144/150
219/219 [==============================] - 0s 981us/step - loss: 0.3346 - accuracy: 0.8617 - val_loss: 0.3433 - val_accuracy: 0.8620
Epoch 145/150
219/219 [==============================] - 0s 904us/step - loss: 0.3361 - accuracy: 0.8606 - val_loss: 0.3446 - val_accuracy: 0.8627
Epoch 146/150
219/219 [==============================] - 0s 893us/step - loss: 0.3328 - accuracy: 0.8634 - val_loss: 0.3437 - val_accuracy: 0.8630
Epoch 147/150
219/219 [==============================] - 0s 994us/step - loss: 0.3341 - accuracy: 0.8617 - val_loss: 0.3428 - val_accuracy: 0.8623
Epoch 148/150
219/219 [==============================] - 0s 952us/step - loss: 0.3327 - accuracy: 0.8606 - val_loss: 0.3435 - val_accuracy: 0.8617
Epoch 149/150
219/219 [==============================] - 0s 956us/step - loss: 0.3344 - accuracy: 0.8616 - val_loss: 0.3442 - val_accuracy: 0.8610
Epoch 150/150
219/219 [==============================] - 0s 889us/step - loss: 0.3378 - accuracy: 0.8587 - val_loss: 0.3455 - val_accuracy: 0.8623
Out[61]:
<tensorflow.python.keras.callbacks.History at 0x1ded0701408>
In [62]:
th=0.5 # Threshold
y_test_preds = np.where(model.predict(X_test_array) > th, 1, 0)
In [63]:
print('Test Metrics at 0.5 Threshold with 3 Hidden layer DNN model\n')
Test_Metrics_3_HiddenLayer_DNN=pd.DataFrame(data=[accuracy_score(y_test, y_test_preds), 
                   recall_score(y_test_array, y_test_preds), 
                   precision_score(y_test_array, y_test_preds),
                   f1_score(y_test_array, y_test_preds)], columns=['3 Hidden Layer DNN'],
             index=["accuracy", "recall", "precision", "f1_score"])
print(Test_Metrics_3_HiddenLayer_DNN)
Test Metrics at 0.5 Threshold with 3 Hidden layer DNN model

           3 Hidden Layer DNN
accuracy             0.862333
recall               0.471074
precision            0.753968
f1_score             0.579858
In [64]:
# Confusion matrix at the 0.5 threshold on the test set
metrics.confusion_matrix(y_test_array, y_test_preds)
Out[64]:
array([[2302,   93],
       [ 320,  285]], dtype=int64)
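Reading the confusion matrix above (rows = actual, columns = predicted, with 1 = churn): TN = 2302, FP = 93, FN = 320, TP = 285. As a quick sanity check, precision = 285 / (285 + 93) ≈ 0.754 and recall = 285 / (285 + 320) ≈ 0.471, which matches the metrics table printed above.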

APPROACH With Batch Normalization after each hidden layer

In [65]:
# Initialize Sequential model
model = tf.keras.models.Sequential()


# Add Input layer to the model
model.add(tf.keras.Input(shape=(13,))) # 13 Features

# Batch Normalization Layer
#model.add(tf.keras.layers.BatchNormalization())

# Hidden layers
model.add(tf.keras.layers.Dense(13, activation='relu', name='Layer_1'))
# Batch Normalization Layer
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dense(13, activation='relu', name='Layer_2'))
# Batch Normalization Layer
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dense(10, activation='relu', name='Layer_3'))
# Batch Normalization Layer
model.add(tf.keras.layers.BatchNormalization())
#Output layer
model.add(tf.keras.layers.Dense(1, activation='sigmoid', name='Output'))
In [66]:
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
In [67]:
model.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Layer_1 (Dense)              (None, 13)                182       
_________________________________________________________________
batch_normalization_2 (Batch (None, 13)                52        
_________________________________________________________________
Layer_2 (Dense)              (None, 13)                182       
_________________________________________________________________
batch_normalization_3 (Batch (None, 13)                52        
_________________________________________________________________
Layer_3 (Dense)              (None, 10)                140       
_________________________________________________________________
batch_normalization_4 (Batch (None, 10)                40        
_________________________________________________________________
Output (Dense)               (None, 1)                 11        
=================================================================
Total params: 659
Trainable params: 587
Non-trainable params: 72
_________________________________________________________________
In [68]:
model.fit(X_train_array, y_train_array, validation_data=(X_test_array, y_test_array), epochs=150,
          batch_size = 32)
Epoch 1/150
219/219 [==============================] - 0s 2ms/step - loss: 0.5988 - accuracy: 0.7096 - val_loss: 0.4666 - val_accuracy: 0.8143
Epoch 2/150
219/219 [==============================] - 0s 1ms/step - loss: 0.4406 - accuracy: 0.8120 - val_loss: 0.4036 - val_accuracy: 0.8360
Epoch 3/150
219/219 [==============================] - 0s 1ms/step - loss: 0.4076 - accuracy: 0.8236 - val_loss: 0.3856 - val_accuracy: 0.8450
Epoch 4/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3922 - accuracy: 0.8321 - val_loss: 0.3743 - val_accuracy: 0.8453
Epoch 5/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3852 - accuracy: 0.8336 - val_loss: 0.3731 - val_accuracy: 0.8467
Epoch 6/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3815 - accuracy: 0.8409 - val_loss: 0.3635 - val_accuracy: 0.8490
Epoch 7/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3739 - accuracy: 0.8401 - val_loss: 0.3604 - val_accuracy: 0.8543
Epoch 8/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3694 - accuracy: 0.8454 - val_loss: 0.3562 - val_accuracy: 0.8530
Epoch 9/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3643 - accuracy: 0.8461 - val_loss: 0.3534 - val_accuracy: 0.8567
Epoch 10/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3632 - accuracy: 0.8469 - val_loss: 0.3527 - val_accuracy: 0.8570
Epoch 11/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3639 - accuracy: 0.8464 - val_loss: 0.3501 - val_accuracy: 0.8610
Epoch 12/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3599 - accuracy: 0.8507 - val_loss: 0.3497 - val_accuracy: 0.8570
Epoch 13/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3589 - accuracy: 0.8481 - val_loss: 0.3466 - val_accuracy: 0.8563
Epoch 14/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3558 - accuracy: 0.8513 - val_loss: 0.3460 - val_accuracy: 0.8570
Epoch 15/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3569 - accuracy: 0.8523 - val_loss: 0.3449 - val_accuracy: 0.8590
Epoch 16/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3563 - accuracy: 0.8516 - val_loss: 0.3449 - val_accuracy: 0.8590
Epoch 17/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3533 - accuracy: 0.8503 - val_loss: 0.3444 - val_accuracy: 0.8590
Epoch 18/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3504 - accuracy: 0.8544 - val_loss: 0.3443 - val_accuracy: 0.8597
Epoch 19/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3521 - accuracy: 0.8517 - val_loss: 0.3419 - val_accuracy: 0.8623
Epoch 20/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3479 - accuracy: 0.8567 - val_loss: 0.3431 - val_accuracy: 0.8620
Epoch 21/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3499 - accuracy: 0.8573 - val_loss: 0.3433 - val_accuracy: 0.8617
Epoch 22/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3486 - accuracy: 0.8569 - val_loss: 0.3422 - val_accuracy: 0.8627
Epoch 23/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3476 - accuracy: 0.8570 - val_loss: 0.3408 - val_accuracy: 0.8653
Epoch 24/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3469 - accuracy: 0.8574 - val_loss: 0.3425 - val_accuracy: 0.8643
Epoch 25/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3500 - accuracy: 0.8549 - val_loss: 0.3414 - val_accuracy: 0.8633
Epoch 26/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3463 - accuracy: 0.8589 - val_loss: 0.3418 - val_accuracy: 0.8663
Epoch 27/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3445 - accuracy: 0.8601 - val_loss: 0.3398 - val_accuracy: 0.8617
Epoch 28/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3476 - accuracy: 0.8571 - val_loss: 0.3401 - val_accuracy: 0.8650
Epoch 29/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3450 - accuracy: 0.8557 - val_loss: 0.3428 - val_accuracy: 0.8610
Epoch 30/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3446 - accuracy: 0.8576 - val_loss: 0.3420 - val_accuracy: 0.8650
Epoch 31/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3463 - accuracy: 0.8613 - val_loss: 0.3420 - val_accuracy: 0.8617
Epoch 32/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3462 - accuracy: 0.8570 - val_loss: 0.3398 - val_accuracy: 0.8647
Epoch 33/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3432 - accuracy: 0.8577 - val_loss: 0.3408 - val_accuracy: 0.8637
Epoch 34/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3436 - accuracy: 0.8577 - val_loss: 0.3446 - val_accuracy: 0.8570
Epoch 35/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3407 - accuracy: 0.8614 - val_loss: 0.3394 - val_accuracy: 0.8630
Epoch 36/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3439 - accuracy: 0.8586 - val_loss: 0.3397 - val_accuracy: 0.8663
Epoch 37/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3441 - accuracy: 0.8569 - val_loss: 0.3401 - val_accuracy: 0.8637
Epoch 38/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3448 - accuracy: 0.8580 - val_loss: 0.3396 - val_accuracy: 0.8627
Epoch 39/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3416 - accuracy: 0.8597 - val_loss: 0.3383 - val_accuracy: 0.8663
Epoch 40/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3394 - accuracy: 0.8584 - val_loss: 0.3409 - val_accuracy: 0.8657
Epoch 41/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3410 - accuracy: 0.8599 - val_loss: 0.3399 - val_accuracy: 0.8637
Epoch 42/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3415 - accuracy: 0.8610 - val_loss: 0.3422 - val_accuracy: 0.8667
Epoch 43/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3396 - accuracy: 0.8600 - val_loss: 0.3425 - val_accuracy: 0.8667
Epoch 44/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3383 - accuracy: 0.8589 - val_loss: 0.3414 - val_accuracy: 0.8650
Epoch 45/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3398 - accuracy: 0.8604 - val_loss: 0.3426 - val_accuracy: 0.8650
Epoch 46/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3374 - accuracy: 0.8597 - val_loss: 0.3439 - val_accuracy: 0.8620
Epoch 47/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3397 - accuracy: 0.8603 - val_loss: 0.3417 - val_accuracy: 0.8657
Epoch 48/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3412 - accuracy: 0.8591 - val_loss: 0.3392 - val_accuracy: 0.8667
Epoch 49/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3381 - accuracy: 0.8614 - val_loss: 0.3418 - val_accuracy: 0.8653
Epoch 50/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3399 - accuracy: 0.8604 - val_loss: 0.3402 - val_accuracy: 0.8647
Epoch 51/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3402 - accuracy: 0.8603 - val_loss: 0.3426 - val_accuracy: 0.8650
Epoch 52/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3402 - accuracy: 0.8584 - val_loss: 0.3431 - val_accuracy: 0.8610
Epoch 53/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3372 - accuracy: 0.8624 - val_loss: 0.3407 - val_accuracy: 0.8637
Epoch 54/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3307 - accuracy: 0.8631 - val_loss: 0.3432 - val_accuracy: 0.8643
Epoch 55/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3381 - accuracy: 0.8601 - val_loss: 0.3415 - val_accuracy: 0.8637
Epoch 56/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3398 - accuracy: 0.8583 - val_loss: 0.3426 - val_accuracy: 0.8627
Epoch 57/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3397 - accuracy: 0.8629 - val_loss: 0.3428 - val_accuracy: 0.8637
Epoch 58/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3364 - accuracy: 0.8651 - val_loss: 0.3456 - val_accuracy: 0.8610
Epoch 59/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3336 - accuracy: 0.8620 - val_loss: 0.3417 - val_accuracy: 0.8650
Epoch 60/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3347 - accuracy: 0.8597 - val_loss: 0.3429 - val_accuracy: 0.8607
Epoch 61/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3336 - accuracy: 0.8601 - val_loss: 0.3457 - val_accuracy: 0.8580
Epoch 62/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3357 - accuracy: 0.8607 - val_loss: 0.3453 - val_accuracy: 0.8630
Epoch 63/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3359 - accuracy: 0.8636 - val_loss: 0.3452 - val_accuracy: 0.8590
Epoch 64/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3361 - accuracy: 0.8617 - val_loss: 0.3446 - val_accuracy: 0.8600
Epoch 65/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3366 - accuracy: 0.8621 - val_loss: 0.3466 - val_accuracy: 0.8593
Epoch 66/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3309 - accuracy: 0.8661 - val_loss: 0.3443 - val_accuracy: 0.8620
Epoch 67/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3360 - accuracy: 0.8606 - val_loss: 0.3458 - val_accuracy: 0.8593
Epoch 68/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3341 - accuracy: 0.8627 - val_loss: 0.3445 - val_accuracy: 0.8637
Epoch 69/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3324 - accuracy: 0.8633 - val_loss: 0.3434 - val_accuracy: 0.8617
Epoch 70/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3330 - accuracy: 0.8609 - val_loss: 0.3444 - val_accuracy: 0.8630
Epoch 71/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3341 - accuracy: 0.8606 - val_loss: 0.3441 - val_accuracy: 0.8627
Epoch 72/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3340 - accuracy: 0.8627 - val_loss: 0.3485 - val_accuracy: 0.8617
Epoch 73/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3365 - accuracy: 0.8617 - val_loss: 0.3441 - val_accuracy: 0.8613
Epoch 74/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3320 - accuracy: 0.8614 - val_loss: 0.3428 - val_accuracy: 0.8617
Epoch 75/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3332 - accuracy: 0.8631 - val_loss: 0.3420 - val_accuracy: 0.8617
Epoch 76/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3334 - accuracy: 0.8616 - val_loss: 0.3441 - val_accuracy: 0.8643
Epoch 77/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3308 - accuracy: 0.8654 - val_loss: 0.3433 - val_accuracy: 0.8623
Epoch 78/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3330 - accuracy: 0.8646 - val_loss: 0.3443 - val_accuracy: 0.8610
Epoch 79/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3336 - accuracy: 0.8644 - val_loss: 0.3428 - val_accuracy: 0.8650
Epoch 80/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3305 - accuracy: 0.8629 - val_loss: 0.3430 - val_accuracy: 0.8580
Epoch 81/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3318 - accuracy: 0.8650 - val_loss: 0.3463 - val_accuracy: 0.8573
Epoch 82/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3300 - accuracy: 0.8654 - val_loss: 0.3440 - val_accuracy: 0.8583
Epoch 83/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3339 - accuracy: 0.8623 - val_loss: 0.3443 - val_accuracy: 0.8597
Epoch 84/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3349 - accuracy: 0.8651 - val_loss: 0.3431 - val_accuracy: 0.8620
Epoch 85/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3342 - accuracy: 0.8611 - val_loss: 0.3438 - val_accuracy: 0.8587
Epoch 86/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3306 - accuracy: 0.8667 - val_loss: 0.3431 - val_accuracy: 0.8627
Epoch 87/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3284 - accuracy: 0.8621 - val_loss: 0.3425 - val_accuracy: 0.8617
Epoch 88/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3360 - accuracy: 0.8596 - val_loss: 0.3464 - val_accuracy: 0.8607
Epoch 89/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3315 - accuracy: 0.8624 - val_loss: 0.3432 - val_accuracy: 0.8647
Epoch 90/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3308 - accuracy: 0.8640 - val_loss: 0.3431 - val_accuracy: 0.8643
Epoch 91/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3257 - accuracy: 0.8624 - val_loss: 0.3441 - val_accuracy: 0.8613
Epoch 92/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3297 - accuracy: 0.8630 - val_loss: 0.3446 - val_accuracy: 0.8620
Epoch 93/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3342 - accuracy: 0.8621 - val_loss: 0.3403 - val_accuracy: 0.8643
Epoch 94/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3298 - accuracy: 0.8619 - val_loss: 0.3440 - val_accuracy: 0.8623
Epoch 95/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3310 - accuracy: 0.8623 - val_loss: 0.3420 - val_accuracy: 0.8637
Epoch 96/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3330 - accuracy: 0.8660 - val_loss: 0.3406 - val_accuracy: 0.8667
Epoch 97/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3333 - accuracy: 0.8647 - val_loss: 0.3417 - val_accuracy: 0.8637
Epoch 98/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3275 - accuracy: 0.8664 - val_loss: 0.3411 - val_accuracy: 0.8647
Epoch 99/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3334 - accuracy: 0.8634 - val_loss: 0.3438 - val_accuracy: 0.8610
Epoch 100/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3314 - accuracy: 0.8633 - val_loss: 0.3414 - val_accuracy: 0.8650
Epoch 101/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3319 - accuracy: 0.8629 - val_loss: 0.3424 - val_accuracy: 0.8633
Epoch 102/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3319 - accuracy: 0.8610 - val_loss: 0.3444 - val_accuracy: 0.8637
Epoch 103/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3261 - accuracy: 0.8659 - val_loss: 0.3445 - val_accuracy: 0.8600
Epoch 104/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3314 - accuracy: 0.8610 - val_loss: 0.3424 - val_accuracy: 0.8600
Epoch 105/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3244 - accuracy: 0.8667 - val_loss: 0.3416 - val_accuracy: 0.8643
Epoch 106/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3334 - accuracy: 0.8604 - val_loss: 0.3445 - val_accuracy: 0.8647
Epoch 107/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3254 - accuracy: 0.8659 - val_loss: 0.3423 - val_accuracy: 0.8663
Epoch 108/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3315 - accuracy: 0.8637 - val_loss: 0.3424 - val_accuracy: 0.8640
Epoch 109/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3306 - accuracy: 0.8621 - val_loss: 0.3439 - val_accuracy: 0.8647
Epoch 110/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3312 - accuracy: 0.8636 - val_loss: 0.3439 - val_accuracy: 0.8660
Epoch 111/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3293 - accuracy: 0.8644 - val_loss: 0.3436 - val_accuracy: 0.8643
Epoch 112/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3307 - accuracy: 0.8664 - val_loss: 0.3436 - val_accuracy: 0.8617
Epoch 113/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3301 - accuracy: 0.8650 - val_loss: 0.3456 - val_accuracy: 0.8607
Epoch 114/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3326 - accuracy: 0.8617 - val_loss: 0.3456 - val_accuracy: 0.8613
Epoch 115/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3281 - accuracy: 0.8634 - val_loss: 0.3430 - val_accuracy: 0.8637
Epoch 116/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3295 - accuracy: 0.8659 - val_loss: 0.3445 - val_accuracy: 0.8627
Epoch 117/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3273 - accuracy: 0.8643 - val_loss: 0.3419 - val_accuracy: 0.8657
Epoch 118/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3278 - accuracy: 0.8649 - val_loss: 0.3402 - val_accuracy: 0.8653
Epoch 119/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3312 - accuracy: 0.8659 - val_loss: 0.3448 - val_accuracy: 0.8607
Epoch 120/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3291 - accuracy: 0.8633 - val_loss: 0.3417 - val_accuracy: 0.8657
Epoch 121/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3287 - accuracy: 0.8616 - val_loss: 0.3414 - val_accuracy: 0.8653
Epoch 122/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3276 - accuracy: 0.8641 - val_loss: 0.3439 - val_accuracy: 0.8653
Epoch 123/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3314 - accuracy: 0.8624 - val_loss: 0.3431 - val_accuracy: 0.8650
Epoch 124/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3284 - accuracy: 0.8651 - val_loss: 0.3459 - val_accuracy: 0.8617
Epoch 125/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3247 - accuracy: 0.8654 - val_loss: 0.3446 - val_accuracy: 0.8630
Epoch 126/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3341 - accuracy: 0.8603 - val_loss: 0.3433 - val_accuracy: 0.8627
Epoch 127/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3280 - accuracy: 0.8624 - val_loss: 0.3446 - val_accuracy: 0.8643
Epoch 128/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3268 - accuracy: 0.8647 - val_loss: 0.3430 - val_accuracy: 0.8627
Epoch 129/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3242 - accuracy: 0.8656 - val_loss: 0.3430 - val_accuracy: 0.8593
Epoch 130/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3243 - accuracy: 0.8664 - val_loss: 0.3434 - val_accuracy: 0.8613
Epoch 131/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3302 - accuracy: 0.8621 - val_loss: 0.3464 - val_accuracy: 0.8593
Epoch 132/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3273 - accuracy: 0.8663 - val_loss: 0.3438 - val_accuracy: 0.8600
Epoch 133/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3289 - accuracy: 0.8626 - val_loss: 0.3428 - val_accuracy: 0.8623
Epoch 134/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3264 - accuracy: 0.8641 - val_loss: 0.3478 - val_accuracy: 0.8577
Epoch 135/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3260 - accuracy: 0.8650 - val_loss: 0.3510 - val_accuracy: 0.8513
Epoch 136/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3293 - accuracy: 0.8641 - val_loss: 0.3464 - val_accuracy: 0.8613
Epoch 137/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3265 - accuracy: 0.8667 - val_loss: 0.3460 - val_accuracy: 0.8607
Epoch 138/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3244 - accuracy: 0.8650 - val_loss: 0.3409 - val_accuracy: 0.8677
Epoch 139/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3271 - accuracy: 0.8654 - val_loss: 0.3430 - val_accuracy: 0.8657
Epoch 140/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3295 - accuracy: 0.8624 - val_loss: 0.3425 - val_accuracy: 0.8630
Epoch 141/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3271 - accuracy: 0.8643 - val_loss: 0.3445 - val_accuracy: 0.8613
Epoch 142/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3255 - accuracy: 0.8656 - val_loss: 0.3419 - val_accuracy: 0.8650
Epoch 143/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3261 - accuracy: 0.8651 - val_loss: 0.3405 - val_accuracy: 0.8673
Epoch 144/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3243 - accuracy: 0.8647 - val_loss: 0.3408 - val_accuracy: 0.8657
Epoch 145/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3306 - accuracy: 0.8606 - val_loss: 0.3423 - val_accuracy: 0.8627
Epoch 146/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3281 - accuracy: 0.8657 - val_loss: 0.3407 - val_accuracy: 0.8640
Epoch 147/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3263 - accuracy: 0.8659 - val_loss: 0.3387 - val_accuracy: 0.8647
Epoch 148/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3263 - accuracy: 0.8627 - val_loss: 0.3418 - val_accuracy: 0.8633
Epoch 149/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3238 - accuracy: 0.8620 - val_loss: 0.3405 - val_accuracy: 0.8667
Epoch 150/150
219/219 [==============================] - 0s 1ms/step - loss: 0.3257 - accuracy: 0.8647 - val_loss: 0.3434 - val_accuracy: 0.8613
Out[68]:
<tensorflow.python.keras.callbacks.History at 0x1dec6cbf488>
In [69]:
th=0.5 # Threshold
y_test_preds = np.where(model.predict(X_test_array) > th, 1, 0)
In [70]:
print('Test Metrics at 0.5 Threshold with Batch Norm after each hidden layer DNN model\n')
Test_Metrics_BatchNorm=pd.DataFrame(data=[accuracy_score(y_test, y_test_preds), 
                   recall_score(y_test_array, y_test_preds), 
                   precision_score(y_test_array, y_test_preds),
                   f1_score(y_test_array, y_test_preds)], columns=['BatchNorm Hidden layers'],
             index=["accuracy", "recall", "precision", "f1_score"])
print(Test_Metrics_BatchNorm)
Test Metrics at 0.5 Threshold with Batch Norm after each hidden layer DNN model

           BatchNorm Hidden layers
accuracy                  0.861333
recall                    0.538843
precision                 0.704104
f1_score                  0.610487
In [71]:
# Confusion matrix at the 0.5 threshold on the test set
metrics.confusion_matrix(y_test_array, y_test_preds)
Out[71]:
array([[2258,  137],
       [ 279,  326]], dtype=int64)

APPROACH Using Weight and Bias initializer

In [72]:
from tensorflow.keras import initializers  # kept for reference; the string names 'he_normal' and 'Ones' are used directly below
In [73]:
# Initialize Sequential model
model = tf.keras.models.Sequential()


# Add Input layer to the model
model.add(tf.keras.Input(shape=(13,))) # 13 Features


# Hidden layers
model.add(tf.keras.layers.Dense(13, kernel_initializer='he_normal', bias_initializer='Ones',activation='relu', name='Layer_1'))
# Batch Normalization Layer
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dense(13, kernel_initializer='he_normal',bias_initializer='Ones',activation='relu', name='Layer_2'))
# Batch Normalization Layer
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dense(10,kernel_initializer='he_normal',bias_initializer='Ones', activation='relu', name='Layer_3'))
# Batch Normalization Layer
model.add(tf.keras.layers.BatchNormalization())
#Output layer
model.add(tf.keras.layers.Dense(1, activation='sigmoid', name='Output'))
In [74]:
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
In [75]:
model.fit(X_train_array, y_train_array, validation_data=(X_test_array, y_test_array), epochs=50,
          batch_size = 32)
Epoch 1/50
161/219 [=====================>........] - ETA: 0s - loss: 0.6049 - accuracy: 0.7013WARNING:tensorflow:Callbacks method `on_test_batch_begin` is slow compared to the batch time (batch time: 0.0000s vs `on_test_batch_begin` time: 0.0010s). Check your callbacks.
219/219 [==============================] - 0s 2ms/step - loss: 0.5716 - accuracy: 0.7249 - val_loss: 0.4599 - val_accuracy: 0.8177
Epoch 2/50
219/219 [==============================] - 0s 1ms/step - loss: 0.4119 - accuracy: 0.8284 - val_loss: 0.3758 - val_accuracy: 0.8483
Epoch 3/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3881 - accuracy: 0.8316 - val_loss: 0.3643 - val_accuracy: 0.8490
Epoch 4/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3725 - accuracy: 0.8419 - val_loss: 0.3608 - val_accuracy: 0.8540
Epoch 5/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3696 - accuracy: 0.8461 - val_loss: 0.3543 - val_accuracy: 0.8560
Epoch 6/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3635 - accuracy: 0.8490 - val_loss: 0.3515 - val_accuracy: 0.8593
Epoch 7/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3587 - accuracy: 0.8539 - val_loss: 0.3494 - val_accuracy: 0.8613
Epoch 8/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3619 - accuracy: 0.8471 - val_loss: 0.3478 - val_accuracy: 0.8590
Epoch 9/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3587 - accuracy: 0.8504 - val_loss: 0.3482 - val_accuracy: 0.8600
Epoch 10/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3582 - accuracy: 0.8477 - val_loss: 0.3449 - val_accuracy: 0.8623
Epoch 11/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3515 - accuracy: 0.8530 - val_loss: 0.3451 - val_accuracy: 0.8590
Epoch 12/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3538 - accuracy: 0.8540 - val_loss: 0.3440 - val_accuracy: 0.8587
Epoch 13/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3531 - accuracy: 0.8516 - val_loss: 0.3440 - val_accuracy: 0.8610
Epoch 14/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3517 - accuracy: 0.8520 - val_loss: 0.3459 - val_accuracy: 0.8593
Epoch 15/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3528 - accuracy: 0.8543 - val_loss: 0.3429 - val_accuracy: 0.8647
Epoch 16/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3476 - accuracy: 0.8571 - val_loss: 0.3456 - val_accuracy: 0.8610
Epoch 17/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3529 - accuracy: 0.8547 - val_loss: 0.3418 - val_accuracy: 0.8613
Epoch 18/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3535 - accuracy: 0.8550 - val_loss: 0.3416 - val_accuracy: 0.8620
Epoch 19/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3493 - accuracy: 0.8546 - val_loss: 0.3404 - val_accuracy: 0.8627
Epoch 20/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3473 - accuracy: 0.8559 - val_loss: 0.3405 - val_accuracy: 0.8623
Epoch 21/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3488 - accuracy: 0.8584 - val_loss: 0.3411 - val_accuracy: 0.8613
Epoch 22/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3505 - accuracy: 0.8561 - val_loss: 0.3408 - val_accuracy: 0.8607
Epoch 23/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3443 - accuracy: 0.8576 - val_loss: 0.3401 - val_accuracy: 0.8610
Epoch 24/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3477 - accuracy: 0.8557 - val_loss: 0.3391 - val_accuracy: 0.8620
Epoch 25/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3491 - accuracy: 0.8539 - val_loss: 0.3395 - val_accuracy: 0.8610
Epoch 26/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3480 - accuracy: 0.8564 - val_loss: 0.3379 - val_accuracy: 0.8620
Epoch 27/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3459 - accuracy: 0.8566 - val_loss: 0.3376 - val_accuracy: 0.8617
Epoch 28/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3477 - accuracy: 0.8547 - val_loss: 0.3380 - val_accuracy: 0.8590
Epoch 29/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3444 - accuracy: 0.8574 - val_loss: 0.3382 - val_accuracy: 0.8607
Epoch 30/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3471 - accuracy: 0.8576 - val_loss: 0.3399 - val_accuracy: 0.8613
Epoch 31/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3441 - accuracy: 0.8536 - val_loss: 0.3380 - val_accuracy: 0.8613
Epoch 32/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3498 - accuracy: 0.8561 - val_loss: 0.3402 - val_accuracy: 0.8597
Epoch 33/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3446 - accuracy: 0.8550 - val_loss: 0.3383 - val_accuracy: 0.8593
Epoch 34/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3446 - accuracy: 0.8550 - val_loss: 0.3373 - val_accuracy: 0.8630
Epoch 35/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3442 - accuracy: 0.8623 - val_loss: 0.3376 - val_accuracy: 0.8620
Epoch 36/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3443 - accuracy: 0.8571 - val_loss: 0.3382 - val_accuracy: 0.8613
Epoch 37/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3411 - accuracy: 0.8576 - val_loss: 0.3416 - val_accuracy: 0.8570
Epoch 38/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3420 - accuracy: 0.8590 - val_loss: 0.3378 - val_accuracy: 0.8613
Epoch 39/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3446 - accuracy: 0.8586 - val_loss: 0.3379 - val_accuracy: 0.8603
Epoch 40/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3437 - accuracy: 0.8596 - val_loss: 0.3404 - val_accuracy: 0.8583
Epoch 41/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3374 - accuracy: 0.8609 - val_loss: 0.3406 - val_accuracy: 0.8580
Epoch 42/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3420 - accuracy: 0.8571 - val_loss: 0.3405 - val_accuracy: 0.8587
Epoch 43/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3432 - accuracy: 0.8564 - val_loss: 0.3412 - val_accuracy: 0.8627
Epoch 44/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3432 - accuracy: 0.8566 - val_loss: 0.3382 - val_accuracy: 0.8603
Epoch 45/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3454 - accuracy: 0.8569 - val_loss: 0.3379 - val_accuracy: 0.8610
Epoch 46/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3412 - accuracy: 0.8597 - val_loss: 0.3376 - val_accuracy: 0.8627
Epoch 47/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3432 - accuracy: 0.8556 - val_loss: 0.3379 - val_accuracy: 0.8600
Epoch 48/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3421 - accuracy: 0.8579 - val_loss: 0.3382 - val_accuracy: 0.8593
Epoch 49/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3423 - accuracy: 0.8589 - val_loss: 0.3371 - val_accuracy: 0.8593
Epoch 50/50
219/219 [==============================] - 0s 1ms/step - loss: 0.3374 - accuracy: 0.8573 - val_loss: 0.3377 - val_accuracy: 0.8593
Out[75]:
<tensorflow.python.keras.callbacks.History at 0x1ded28e0188>
In [76]:
th=0.5 # Threshold
y_test_preds = np.where(model.predict(X_test_array) > th, 1, 0)
In [77]:
print('Test Metrics at 0.5 Threshold with Weight and Bias initialization & Batch Norm after each hidden layer DNN model\n')
Test_Metrics_Weight_Init=pd.DataFrame(data=[accuracy_score(y_test, y_test_preds), 
                   recall_score(y_test_array, y_test_preds), 
                   precision_score(y_test_array, y_test_preds),
                   f1_score(y_test_array, y_test_preds)], columns=['Weight Initialize'],
             index=["accuracy", "recall", "precision", "f1_score"])
print(Test_Metrics_Weight_Init)
Test Metrics at 0.5 Threshold for the DNN model with Weight and Bias initialization & Batch Norm after each hidden layer

           Weight Initialize
accuracy            0.859333
recall              0.489256
precision           0.723716
f1_score            0.583826
In [78]:
# Confusion matrix at the 0.5 threshold on the test set
metrics.confusion_matrix(y_test_array, y_test_preds)
Out[78]:
array([[2282,  113],
       [ 309,  296]], dtype=int64)
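As a quick sanity check (this cell is not in the original notebook), the precision, recall and f1_score reported above can be recovered directly from this confusion matrix, which sklearn lays out as [[TN, FP], [FN, TP]]:

# Sanity-check sketch using the values from the matrix above
tn, fp, fn, tp = 2282, 113, 309, 296
precision = tp / (tp + fp)                                  # 296 / 409 ≈ 0.7237
recall    = tp / (tp + fn)                                  # 296 / 605 ≈ 0.4893
f1        = 2 * precision * recall / (precision + recall)   # ≈ 0.5838
print(precision, recall, f1)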

APPROACH: Applying Dropout

In [79]:
# Initialize Sequential model
model = tf.keras.models.Sequential()


# Add Input layer to the model
model.add(tf.keras.Input(shape=(13,))) # 13 Features


# Hidden layers
model.add(tf.keras.layers.Dense(13, activation='relu', name='Layer_1'))
model.add(tf.keras.layers.Dense(13, activation='relu', name='Layer_2'))

# Dropout layer
model.add(tf.keras.layers.Dropout(0.5))

# Hidden layers
model.add(tf.keras.layers.Dense(10, activation='relu', name='Layer_3'))


# Dropout layer
model.add(tf.keras.layers.Dropout(0.3))

# Output layer
model.add(tf.keras.layers.Dense(1, activation='sigmoid', name='Output'))
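Keras Dropout layers randomly zero a fraction of the preceding layer's activations during training (50% after Layer_2 and 30% after Layer_3 here) and scale the surviving activations so their expected sum is unchanged; at inference they pass inputs through untouched. A minimal sketch, not part of the original notebook, that illustrates this on a toy input:

# Dropout is only active while training; at inference it is a no-op.
import numpy as np
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.5)
x = np.ones((1, 6), dtype="float32")
print(drop(x, training=False).numpy())  # all ones - dropout disabled at inference
print(drop(x, training=True).numpy())   # roughly half the units zeroed, the rest scaled by 1/(1-0.5) = 2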
In [80]:
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
In [81]:
model.summary()
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Layer_1 (Dense)              (None, 13)                182       
_________________________________________________________________
Layer_2 (Dense)              (None, 13)                182       
_________________________________________________________________
dropout (Dropout)            (None, 13)                0         
_________________________________________________________________
Layer_3 (Dense)              (None, 10)                140       
_________________________________________________________________
dropout_1 (Dropout)          (None, 10)                0         
_________________________________________________________________
Output (Dense)               (None, 1)                 11        
=================================================================
Total params: 515
Trainable params: 515
Non-trainable params: 0
_________________________________________________________________
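The parameter counts in the summary follow directly from the Dense layer sizes (inputs × units, plus one bias per unit); a quick check, not in the original notebook:

# Dense params = inputs*units + units (bias); Dropout layers add no parameters
print(13*13 + 13)  # Layer_1 and Layer_2: 182 each
print(13*10 + 10)  # Layer_3: 140
print(10*1 + 1)    # Output: 11  -> total 182 + 182 + 140 + 11 = 515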
In [82]:
model.fit(X_train_array, y_train_array, validation_data=(X_test_array, y_test_array), epochs=100,
          batch_size = 32, verbose=0)
Out[82]:
<tensorflow.python.keras.callbacks.History at 0x1ded3b1da08>
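The dropout network is trained for 100 epochs with verbose=0, so no per-epoch log is shown. One optional refinement, not used in the run above, is to stop training once the validation loss stops improving; a hedged sketch using Keras' EarlyStopping callback with the same arrays:

# Optional variant: early stopping on val_loss, keeping the best weights seen
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                              restore_best_weights=True)
model.fit(X_train_array, y_train_array,
          validation_data=(X_test_array, y_test_array),
          epochs=100, batch_size=32, verbose=0,
          callbacks=[early_stop])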
In [83]:
th=0.5 # Threshold
y_test_preds = np.where(model.predict(X_test_array) > th, 1, 0)
In [84]:
print('Test Metrics at 0.5 Threshold for the Dropout DNN model\n')
Test_Metrics_DropOut=pd.DataFrame(data=[accuracy_score(y_test_array, y_test_preds), 
                   recall_score(y_test_array, y_test_preds), 
                   precision_score(y_test_array, y_test_preds),
                   f1_score(y_test_array, y_test_preds)], columns=['DropOut'],
             index=["accuracy", "recall", "precision", "f1_score"])
print(Test_Metrics_DropOut)
Test Metrics at 0.5 Threshold for the Dropout DNN model

            DropOut
accuracy   0.867333
recall     0.433058
precision  0.826498
f1_score   0.568330
In [85]:
# Confusion matrix at the 0.5 threshold on the test set
metrics.confusion_matrix(y_test_array, y_test_preds)
Out[85]:
array([[2340,   55],
       [ 343,  262]], dtype=int64)

MODEL COMPARISON

In [86]:
Model_Comparison_df=Test_Metrics_Basic_DNN.copy()  # copy so the Basic DNN metrics frame is not modified in place
Model_Comparison_df['3 Hidden Layer DNN']=Test_Metrics_3_HiddenLayer_DNN['3 Hidden Layer DNN']
Model_Comparison_df['BatchNorm Hidden layers']=Test_Metrics_BatchNorm['BatchNorm Hidden layers']
Model_Comparison_df['Weight Initialize']=Test_Metrics_Weight_Init['Weight Initialize']
Model_Comparison_df['DropOut']=Test_Metrics_DropOut['DropOut']
Model_Comparison_df
Out[86]:
           Basic DNN  3 Hidden Layer DNN  BatchNorm Hidden layers  Weight Initialize   DropOut
accuracy    0.861667            0.862333                 0.861333           0.859333  0.867333
recall      0.466116            0.471074                 0.538843           0.489256  0.433058
precision   0.754011            0.753968                 0.704104           0.723716  0.826498
f1_score    0.576098            0.579858                 0.610487           0.583826  0.568330
Among the models tried above, the Dropout approach has the best accuracy and the best precision; its f1_score, while the lowest of the five, is only slightly below the others, so it is the overall winner.
The NN with three dense hidden layers achieves the second-best accuracy, while the Batch Normalisation approach has the best f1_score and the best recall of all the approaches.
The Basic DNN is not far behind either, with slightly better recall and f1_score than the Dropout approach.
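All of the comparisons above use a fixed 0.5 threshold, and the table shows the usual trade-off: the Dropout model's high precision comes at the cost of recall. If catching churners (recall) matters more to the business, the decision threshold itself can be tuned instead of switching models; a sketch, assuming the dropout model and the test arrays from the cells above are still in scope (in practice the threshold should be chosen on a validation split rather than the test set):

# Threshold-tuning sketch: pick the cut-off that maximises f1
import numpy as np
from sklearn.metrics import precision_recall_curve

probs = model.predict(X_test_array).ravel()
prec, rec, thresholds = precision_recall_curve(y_test_array, probs)
f1 = 2 * prec * rec / (prec + rec + 1e-12)   # small epsilon avoids division by zero
best = np.argmax(f1[:-1])                    # last prec/rec pair has no threshold
print('Best threshold: %.3f  f1: %.3f' % (thresholds[best], f1[best]))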